New YouTube Series: Computer things they didn't teach you in school
OK, fine, maybe they DID teach you this in class. But you'd be surprised how many people think they know something yet don't know the background or the etymology of a term. I find these things fascinating. In a world of bootcamp graduates, community college attendees (myself included!), and self-taught learners, I think it's fun to explore topics like the ones I plan to cover in my new YouTube series, "Computer things they didn't teach you."
BOOK RECOMMENDATION: I think of this series as being in the same vein as the wonderful "Imposter's Handbook" series from Rob Conery (I was also involved, somewhat). In Rob's excellent words: "Learn core CS concepts that are part of every CS degree by reading a book meant for humans. You already know how to code and build things, but when it comes to conversations about Big-O notation, database normalization, and binary tree traversal, you grow silent. That used to happen to me and I decided to change it because I hated being left out. I studied for 3 years and wrote everything down and the result is this book."
Of course, it'll take exactly two comments before someone chimes in with "I don't know what crappy school you're going to but we learned this stuff when they handed us our schedule." Fine, maybe this series isn't for you.
In fact I'm doing this series and putting it out there for me. If it helps someone, all the better!
In this first video I cover the concept of Carriage Returns and Line Feeds. But do you know WHY it's called a Carriage Return? What's a carriage? Where did it go? Where is it returning from? Who is feeding it lines?
What would you suggest I do for the next video in the series? I'm thinking Unicode, UTF-8, BOMs, and character encoding.
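As a teaser for that possible episode, here is a minimal sketch, using nothing but Python's standard library, of how a single character becomes different bytes under different encodings (the character and encodings here are just illustrative examples):

```python
# How one character becomes bytes -- the heart of Unicode/UTF-8/BOM confusion.
import codecs

ch = "é"                          # one Unicode code point, U+00E9
print(ch.encode("utf-8"))         # b'\xc3\xa9'   -- two bytes in UTF-8
print(ch.encode("utf-16-le"))     # b'\xe9\x00'   -- two bytes, little-endian
print(codecs.BOM_UTF8)            # b'\xef\xbb\xbf' -- the UTF-8 "BOM" marker
print(codecs.BOM_UTF16_LE)        # b'\xff\xfe'   -- a byte order mark proper
```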
Sponsor: Octopus Deploy wanted me to let you know that Octopus Server is now free for small teams, without time limits. Give your team a single place to release, deploy and operate your software.
But I have no clue why they didn't teach us anything about Unicode then...
So was that a crappy school?
Maybe... talk about the origin of ".NET". That's your area, right?
Of course, there is always the simpler topic of "why do we refer to the web platform (CSS3, SVG, JavaScript, etc.) as HTML5?" The answer is the phenomenon of metonymy in the English language. It is for the same reason that the term "Win32" is used to refer to the whole body of Microsoft's unmanaged code API.
If you want something more difficult, try this: Why does Microsoft use the metonymic "x86" tag for 32-bit Windows SKUs instead of the official "IA-32", or the technically correct "i386" tag that Microsoft was already using until the time of Windows Vista?
If you're looking for something more technical, maybe the origin of the "DWord"/"QWord" data types.
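For what it's worth, the sizes behind those names are easy to demonstrate. A minimal sketch in Python, assuming the usual x86 convention that a "word" is 16 bits, so a double word is 32 and a quad word is 64:

```python
# WORD/DWORD/QWORD sizes made visible with Python's struct module.
import struct

print(struct.calcsize("<H"))   # 2 bytes: a WORD  (unsigned 16-bit)
print(struct.calcsize("<I"))   # 4 bytes: a DWORD (double word, 32-bit)
print(struct.calcsize("<Q"))   # 8 bytes: a QWORD (quad word, 64-bit)

# Packing 1 as a little-endian DWORD shows the raw bytes an API would see.
print(struct.pack("<I", 1))    # b'\x01\x00\x00\x00'
```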
I would request a video about why floating-point numbers are imprecise in general (IEEE 754 encoding and all). It's something many developers overlook, and then they wonder why code comparing their float variables to, e.g., 0 doesn't work.
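A minimal sketch of that exact trap, in plain Python (the numbers are just the classic examples):

```python
import math

# Classic IEEE 754 surprise: 0.1 has no exact binary representation.
print(0.1 + 0.2)                # 0.30000000000000004
print(0.1 + 0.2 == 0.3)         # False -- exact comparison fails

# The same trap with "did my loop land on zero?"
x = 1.0
for _ in range(10):
    x -= 0.1
print(x == 0)                   # False; x is about 1.4e-16, not 0

# The usual fix: compare against a tolerance instead of exact equality.
# (abs_tol matters here; the default relative tolerance fails near zero.)
print(math.isclose(x, 0.0, abs_tol=1e-9))   # True
```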
To be honest, I can't really explain in detail the difference between 32- and 64-bit architectures. So a deeper dive into that would be interesting to see.
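One concrete piece of the answer is pointer width, which this small Python sketch (standard library only) makes visible; a full answer also involves registers, instruction sets, and ABIs:

```python
# Pointer width caps how much memory one process can address:
# 2**32 bytes (4 GB) on 32-bit vs 2**64 on 64-bit.
import struct
import sys

bits = struct.calcsize("P") * 8          # "P" = a native C pointer
print(f"This Python build is {bits}-bit")
print(f"Addressable bytes: 2**{bits} = {2**bits:,}")
print(sys.maxsize == 2**63 - 1)          # True on a 64-bit build
```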
"In this first video I cover the concept of Carriage Returns and Line Feeds. But do you know WHY it's called a Carriage Return? What's a carriage? Where did it go? Where is it returning from? Who is feeding it lines?"
Am I the only one who thought of "Whose Line Is It Anyway?" at that point :D
"Wonderfully concise discussions . . . full of wit . . . It is nearly the perfect book for the noncomputer scientists who want to learn something about the field." --Nature
Example: https://www.youtube.com/watch?v=MijmeoH9LT4
And the same goes for floating-point imprecision, as suggested in the comments above.
https://www.youtube.com/watch?v=PZRI1IfStY0
No need to make the same content twice with a different narrator :)
I would personally love some more hardware topics and topics related to the low-level functioning of the web.
What really happens when you open your browser, type a URL, and hit Enter? Everyone knows the basics: the DNS query, the HTTP GET, the web server returning a result, and the browser rendering it. But few actually understand, or know the history of, routing protocols like OSPF and how they really work. We recently used Dijkstra's algorithm for an AI-based project, so knowing this stuff helps even today in real-world projects (there's a sketch of it below).
But then, I am a self-taught programmer who didn't go to a formal engineering college, so I wouldn't know whether that's something they actually teach in schools. It might still make an interesting topic, though.
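For anyone curious, here is a minimal, illustrative sketch in plain Python of Dijkstra's algorithm, the shortest-path computation OSPF runs over its link-state database (the four-router network here is made up):

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path costs from start. OSPF runs essentially this,
    treating routers as nodes and link costs as edge weights."""
    dist = {start: 0}
    heap = [(0, start)]                      # (cost so far, node)
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                         # stale entry, already improved
        for neighbor, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

# Hypothetical network: router -> list of (neighbor, link cost).
net = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
}
print(dijkstra(net, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```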
I had a three-hour course that ran for two semesters in college on computer history, covering everything from before 1900 to 1979, which helped immensely.
It helps to tag new technologies like gRPC (2015) as rediscoveries of the original technology: the C-language IDL (Interface Description Language) files of 1981.
John R. Nestor, William A. Wulf, and David A. Lamb, IDL: Interface Description Language, Technical Report, Carnegie-Mellon University, 1981.
I'd expect IDL to be based on a much older EDI specification pre-dating ANSI X12.
Boy, if I had a nickel for every time I heard that! I'd love to see and hear a deep dive on this from you, especially in the format of teaching a kid in primary school :)
When programming a DOS terminal screen, if you output just a line feed, the cursor would move down to the next line and the next characters would be written there, in the same column. If a CR was written alone, the next character would be written at the start of the line you were already on.
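Here is a minimal sketch of that behavior in Python, in the spirit of a progress meter (note that most Unix terminals translate a bare LF into CR+LF on output, so the pure-LF behavior described above is easiest to see on a DOS-style console):

```python
import sys
import time

# A bare CR ("\r") returns to column 0 of the *current* line without
# moving down; overwriting in place is how console progress meters work.
for pct in (0, 25, 50, 75, 100):
    sys.stdout.write(f"\rDownloading... {pct:3d}%")   # CR alone: same line
    sys.stdout.flush()
    time.sleep(0.3)
sys.stdout.write("\n")                                # LF: finally move down
```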
It is possible on a manual typewriter to do a carriage return without a line feed by pushing the left side of the carriage just below where the metal bar meets the carriage.