
A Pilgrim’s Progress #2: The Data Science Tool Kit

This is the second post about becoming a data scientist after a career in software engineering. The first part may be found here.

Only a student would think that software developers mostly write computer programs. Coding is a blast–it’s why you get into the field–but the great majority of professional programming time isn’t spent coding. It goes into the processes and tools that allow humans to work together, such as version control and Agile procedures; maintenance; testing and bug fixing; requirements gathering; and documentation. It’s hard to say where writing interesting code falls on that list. Probably not third place. Fourth or fifth, perhaps?

[Figure: Linear PCA vs. nonlinear principal manifolds. Image: Andrei Zinovyev]

Fred Brooks famously showed that the human time that goes into a line of code grows roughly quadratically with the size of the project (I’m paraphrasing outrageously). Coding gets the glory, but virtually all of the significant advances in software engineering since Brooks wrote in the mid-1970s have actually been in the technology and management techniques for orchestrating the efforts of dozens or even hundreds of people to cooperatively write a mass of code that might have the size and complexity of War and Peace. That is, if War and Peace were first released as a few chapters, and had to continue to make sense as the remaining 361 chapters came out over a period of many months or even years. Actually, War and Peace runs about half a million words, or 50,000 lines, which would make it quite a modest piece of software. In comparison, the latest Fedora Linux release has 206 million lines of code. A typical modern car might have 150 million. MacOS has 85 million. In the 1970s, four million lines was an immense program.
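
For a rough sense of where that quadratic comes from: Brooks’s argument in The Mythical Man-Month is often summarized by noting that a team of n people has n(n-1)/2 pairwise communication channels, so coordination overhead climbs much faster than headcount. The little Python sketch below is only an illustration of that arithmetic (the team sizes are arbitrary, chosen just to show the trend), not a model of any particular project.

# Illustration of Brooks's coordination-overhead argument:
# a team of n people has n * (n - 1) / 2 pairwise communication channels,
# so coordination effort grows roughly quadratically with team size.
# The team sizes below are arbitrary and chosen only for illustration.

def communication_channels(team_size: int) -> int:
    """Number of pairwise communication channels in a team of the given size."""
    return team_size * (team_size - 1) // 2

for n in (2, 5, 10, 50, 100):
    print(f"{n:>4} people -> {communication_channels(n):>5} channels")

Doubling the team quadruples (roughly) the number of conversations that have to happen, which is why the tooling and management side of the craft matters so much more than the raw typing.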
