« First « Previous Comments 13 - 52 of 71 Next » Last » Search these comments
It's different from learning about algorithms in isolation.
Actually, this type of course is training the mind of an engineer. In junior high algebra, we knew that quadratic functions grow faster than linear ones; it's here that we find out why that's significant.
Teaching everyone to code is delusional
90% of professional software developers in the U.S. can't code properly. Why not raise the standards for software development instead of just trying to get more warm bodies into the field? The cheaper it becomes to "train" -- and I use that term loosely -- a developer, the crappier the developer will be.
Writing software is like writing a symphony. It is an inherently creative and original task. There's no point in writing software that has already been written because it's cheaper just to use the existing software. That means any software worth writing does something never done before. Therefore, software writing is inherently a skilled profession and requires talent.
Attempts to get everyone to be coders is simply an attempt to dumb down coding -- hell, even the word "coding" is a demeaning term used to trivialize the talent and skill required to write good software.
Yes, teaching software development could instead be used to smarten up people, but first the teachers and the student must actually respect the sophistication of software development. You would think this would be easy when the impact of software on our society has been greater than anything else, maybe save hardware, including cars and religion. Software has fundamentally changed the way people work, play, communicate, enforce laws, research technologies, purchase goods and services, listen to music, and pretty much everything else in life.
Today, if I were in the 7th/8th grade, instead of wasting this winter break, I'd be on youtube all day, soaking up the material. Sure, some notations and concepts, like diagramming recursion trees, may throw me, but hey, all that stuff is recorded and can be reviewed. I can easily cross-reference other courses, if those adjacent components, like statistics and probability, are useful.
I totally agree with you.
But I wasn't really thinking only of the most gifted and self motivated students. I was thinking about the value of programming classes for your sort of average to somewhat above average college student.
Yes, if motivated, they might be able to do god knows what on their own, and there are vast wonderful resources out there these days.
I'm not in disagreement with you. But rather making a different point.
That lecture and other MIT lectures are something I would view, even now for pleasure, if I get through letters of recommendation and other piled up busy work I have to do while on break.
Writing software is like writing a symphony.
Yes, this is why I'd brought up music, as a comparison. It's basically fusing the talents of a musician with that of a trained engineer or scientist.
I was thinking about the value of programming classes for your sort of average to somewhat above average college student.
The thing is that the science and engineering fields, in general, are not for the average to above average kids. For the most part, what ordinary people learn to do is write a few Excel macros and work with a few business tools like Powerpoint and so forth. That's generally enough to work in a back office for some public organization or private firm.
In general, the average kid will seldom get those technical internships/CO-OPs at Exxon-Mobil, United Technologies, Google, etc. And so in reality, the actual trajectories for STEM grads are already an elite outplacement. Typically speaking, no junior year internship means no future job placement. This is why you see so many STEM grads starting in sales support because there really aren't enough R&D type of jobs for them, at least not in America.
The question I was trying to consider, which may even be different from that of the author of the OP's article, was:
With the radically changing work environment, that we can't really even predict say 10 years out, does it make sense for some programming classes to be part of a basic curriculum, much as say Biology or precalculus is today?
Possibly even in high school?
The goal would not be to dumb down the profession. It would be a way, with real-life applications, to get students involved in thinking about logic and very basic programming structures.
Let's face it, a lot of what is often referred to as programming these days isn't anything like software engineering. It's more like using a somewhat sophisticated application, but there is still some basic programming logic involved. This could be writing simple SQL queries, or it might be using one of the newfangled high-end (large-scale) website development packages. A little programming background would come in handy.
In general, the average kid will seldom get those technical internships/CO-OPs at Exxon-Mobil, United Technologies, Google, etc. And so in reality, the actual trajectories for STEM grads are already an elite outplacement. Typically speaking, no junior year internship means no future job placement. This is why you see so many STEM grads starting in sales support because there really aren't enough R&D type of jobs for them, at least not in America.
Again, missing my point entirely. But that's okay.
We talk about all the various types of robots and smart systems that will be in the average work place.
With the radically changing work environment, that we can't really even predict say 10 years out, does it make sense for some programming classes to be part of a basic curriculum, much as say Biology or precalculus is today?
Possibly even in high school?
With the radically changing work environment, that we can't really even predict say 10 years out, does it make sense for some programming classes to be part of a basic curriculum, much as say Biology or precalculus is today?
I'd say that a business analyst type of course would be useful.
This may consist of a MySQL database with a basic sales/inventory type of system, and then the use of, let's say, Visual Studio (or some other UI utility) to access this system, both as an end user and as a customization programmer/analyst.
All in all, it's simple, in the sense that it's about tracking customers and the widgets they've bought, but at the same time it allows a person to learn about writing SQL or business objects to access the database, as well as modifying the look and feel for the end user.
Some of the assignments may include adding a general ledger feature as well as targeting the high paying customers for rewards and benefits. These are all business concepts and apply to the world in general, not just for STEM fields.
Most students take a certain amount of Math, Science, History, Literature as part of what is currently deemed to be part of a well rounded general education. All of these may or may not pertain directly to what the students later do for work.
My hypothesis is that maybe some logic courses or even basic programming should be added to the mix, to account for the EXTREMELY recent and ongoing impact of computers in all areas of our lives. What the hell, everyone has a computer in their pocket at this point.
This is a far cry from suggesting that everyone should be trying to get an engineering job at Google.
My hypothesis is that maybe some logic courses or even basic programming should be added to the mix, to account for the EXTREMELY recent and ongoing impact of computers in all areas of our lives.
I think that my business systems analyst course would fulfill your goals, without the subject being too techie and related to some high end undergraduate engineering analysis.
The idea in my course is to present the computer as a tool for business, not the computer itself. Thus, the coursework highlights the technical aspects w/o it being the focus of the content.
I think by "glue" he meant, for example, the way that Java code connects to an Oracle database on one end, and some totally other language API on the other end. Those are the specialist skills that one doesn't learn in college.
That is one example. Another one would be the communication between mobile apps and the server. Or even the interactions between logical data storage on different processes.
They should teach the merit of a well-designed type system. That art is long lost.
Static-typing in object-procedural languages like C/C++/Java is tedious but inadequate.
On the other hand, dynamic-typing is too dangerous.
Also, anyone who cannot pick up a new language in two weeks should not be a programmer.
Also, anyone who cannot pick up a new language in two weeks should not be a programmer.
A number of accountants and tax types have made a living, simply programming Excel macros for their work.
Also, anyone who cannot pick up a new language in two weeks should not be a programmer.
A number of accountants and tax types have made a living, simply programming Excel macros for their work.
They make money as accountants and tax types, not as programmers. :-)
Hey Rin, have you heard of a TV show called You're the Worst? It is sexy and funny. I think you might like it.
Static-typing in object-procedural languages like C/C++/Java is tedious but inadequate.
C and C++ suck, but I have found no problems or limitations with the strong typing in Java or .NET. Both languages have a rich reflection API that makes them far more dynamic than any so-called dynamic languages.
It has been my observation that any time a developer had problems figuring out what type of object he has or has to create, the developer really had a far more fundamental problem of not understanding the software he was writing or the API he was using.
I have yet to see a disadvantage of a strongly typed language. And, no, performance isn't even one of them, but if it were, it would still be a damn good trade-off. One of the biggest differences between a good programmer and a crappy one is that the crappy one thinks shaving 10% off an algorithm is an improvement, and the good one knows that scalability is what really counts, going from O(n^2) to O(n). Premature optimization is the root of all evil.
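To make that scalability point concrete, here's a rough sketch in Java (hypothetical duplicate-check methods, purely for illustration): the quadratic version compares every pair, while the linear version does one pass with a hash set.

```java
import java.util.HashSet;
import java.util.Set;

public class Scalability {
    // O(n^2): compare every pair of elements.
    static boolean hasDuplicateQuadratic(int[] xs) {
        for (int i = 0; i < xs.length; i++)
            for (int j = i + 1; j < xs.length; j++)
                if (xs[i] == xs[j]) return true;
        return false;
    }

    // O(n): one pass, remembering what we've seen.
    static boolean hasDuplicateLinear(int[] xs) {
        Set<Integer> seen = new HashSet<>();
        for (int x : xs)
            if (!seen.add(x)) return true; // add() returns false if already present
        return false;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5};
        System.out.println(hasDuplicateQuadratic(data)); // true
        System.out.println(hasDuplicateLinear(data));    // true
    }
}
```

Shaving 10% off the inner loop of the first method is noise; switching to the second changes what input sizes are feasible at all.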
Static-typing in object-procedural languages like C/C++/Java is tedious but inadequate.
You can use Scala to craft higher-kinded types/functions. Java is rock-solid and (esp. with Java 8 embracing more functional concepts), given the mediocrity of the average programmer, an adequate choice. Groovy is just plain slick, and node.js is here to stay, so better be a polyglot these days.
Don't get me wrong, I love strongly-typed languages. However, I would like to see a rich type system such as the one in ML or F#. Rich typing does wonders in enforcing program correctness.
There is no type inference in C/C++/C#/Java and it is tedious to repeat type declaration everywhere. Scala is much better at this. However, sometimes it is hard to avoid covariance/contravariance oddities in the generics.
I was an early adopter of Groovy/Grails (2008). Back in those days, it was hard to get things done without digging into framework source code.
Scala is very close to being an ideal object-oriented language. It has so much potential. Unfortunately, it is too complex. Is there a stable IDE yet?
It has been my observation that any time a developer had problems figuring out what type of object he has or has to create, the developer really had a far more fundamental problem of not understanding the software he was writing or the API he was using.
Well, the duck-typing folks have a different view on this. They worship an *extreme* form of TDD and spend much time writing trivial tests like method invocation.
There is no type inference in C/C++/C#/Java and it is tedious to repeat type declaration everywhere. Scala is much better at this.
Sure, there is room for syntactical improvement, but typing has never been the bottleneck of programming. Debugging is. So it's a bad tradeoff to accept hours of debugging a problem that would not even occur if the code is clearer even if that means being a bit more verbose.
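For what it's worth, Java itself later conceded some ground on the verbosity complaint: the diamond operator in Java 7 and local-variable type inference (var) in Java 10. A quick sketch of the tradeoff -- the static type never changes, only the ceremony:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Inference {
    public static void main(String[] args) {
        // Fully explicit: the type is spelled out on both sides.
        Map<String, List<Integer>> scores1 = new HashMap<String, List<Integer>>();

        // Diamond operator (Java 7): one repetition removed.
        Map<String, List<Integer>> scores2 = new HashMap<>();

        // Local-variable inference (Java 10): the declaration is shorter,
        // but the compiler still knows the exact static type -- pure sugar.
        var scores3 = new HashMap<String, List<Integer>>();

        scores3.put("alice", new ArrayList<>(List.of(90, 95)));
        System.out.println(scores3.get("alice").size()); // 2
    }
}
```

In all three cases a type error is caught at compile time; inference only affects what you have to type, not what the compiler checks.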
However, sometimes it is hard to avoid covariance/contravariance oddities in the generics.
Whenever there is a problem with variance, it is because the developer is doing something wrong. Usually it's an old mistake that has become habit.
Is there a stable IDE yet?
I don't know. I've been doing .NET mostly the past few years. However, it's most likely that you'd get the best experience from Eclipse, which is an awesome IDE. I don't know if anyone has written a commercial grade and free Eclipse plugin for Scala, but hey, you could be the first. Eclipse can compile anything if you give it a compiler that follows the Eclipse plugin model. Eclipse is highly extensible.
Personally, I've always felt that the weak point in software development has always been the software between the ears. Good developers are exponentially better than mediocre ones because good developers are clear thinkers who can build good mental models of the solution they are trying to build before writing a single line of code.
So it's a bad tradeoff to accept hours of debugging a problem that would not even occur if the code is clearer even if that means being a bit more verbose.
It is fine with simple classes but type inference is absolutely necessary when you allow the type to be a function of a double and an integer returning a String. :-)
Type inference allows safety and readability to coexist. Also, a rich type system can detect many potential issues statically.
Whenever there is a problem with variance, it is because the developer is doing something wrong. Usually it's an old mistake that has become habit.
Usually it happens in API design when usage is hard to anticipate.
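Here's a rough Java sketch of where use-site variance shows up in API design -- the usual producer-extends/consumer-super rule (hypothetical methods, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

public class Variance {
    // Producer: we only read Numbers out, so accept any subtype list.
    static double sum(List<? extends Number> src) {
        double total = 0;
        for (Number n : src) total += n.doubleValue();
        return total;
    }

    // Consumer: we only write Integers in, so accept any supertype list.
    static void fill(List<? super Integer> dst) {
        dst.add(1);
        dst.add(2);
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        fill(ints);                    // List<Integer> is a valid consumer
        System.out.println(sum(ints)); // 3.0 -- and a valid producer
        // Had sum() taken List<Number>, sum(ints) would not compile:
        // List<Integer> is not a subtype of List<Number> (invariance).
    }
}
```

The oddities the comment mentions tend to appear exactly when an API author guessed wrong about which side would be the producer and which the consumer.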
.Net has better support of generics than Java.
However, it's most likely that you'd get the best experience from Eclipse, which is an awesome IDE.
I hate Eclipse. I much prefer IntelliJ. It does have a Scala plugin which I have not tried recently.
Modern languages like Scala can really use a good IDE for things like refactoring and inspection.
Personally, I've always felt that the weak point in software development has always been the software between the ears.
Of course, but Java and C# allow mediocre programmers to be employable and semi-productive. :-)
They would have negative productivity with C/C++.
Is there a stable IDE yet?
IntelliJ is supposed to be the best, but Eclipse has actually quite matured by now. Luna with the Scala IDE is bearable, which is definitely a big step. Scala is not easy for IDEs to adapt to, the compiler still takes quite a while although incremental compiling cuts down on it, and sbt still sucks. I prefer old-school Maven 3 for project and dependency management, or Gradle/Ivy.
Scala is not easy for IDEs to adapt, the compiler still takes quite a while although incremental compiling cuts down on it, and sbt still sucks.
Yeah, I bet it is a monster to implement tools for. Ironically, Scala is much less compelling without a good IDE.
It is fine with simple classes but type inference is absolutely necessary when you allow the type to be a function of a double and an integer returning a String. :-)
Type inference allows safety and readability to coexist. Also, a rich type system can detect many potential issues statically.
Type inference is just syntactical sugar. It doesn't change the rules of how things are called and what gets passed to methods.
In my opinion, type inference just makes code less readable. For example,
var age = CalculateAge(subject);
may be shorter than the following
DogAge age = CalculateAge <Dog> (subject);
HumanAge age = CalculateAge <Human> (subject);
int age = CalculateAge(subject);
decimal age = CalculateAge(subject);
where
enum DogAge { puppy, youngAdult, oldAdult }
enum HumanAge { baby, toddler, child, adolescent, youngAdult, middleAge, senior }
but how is the first line, var age = CalculateAge(subject), more readable, other than being shorter? If you don't know what CalculateAge is returning, or which overload of CalculateAge is being called, then it's less readable and you are more likely to make a mistake in interpreting the code.
Worse yet are languages that allow the type to be changed like JavaScript. Then you don't know if integer or floating point arithmetic or even string concatenation is being used. Can you tell what the line below prints?
var years = GetYearsSinceEpoch1900(DateTime.Now);
output.Print(years + 1900);
It could just as easily print "1151900" as it could print "2015". Now change the var to int or String and now it's obvious.
I prefer the ideology of clarity over brevity expressed in the phrase "always be explicit". Scala, from what I've read and watched, is inconsistent on this principle.
Usually it happens in API design when usage is hard to anticipate.
Ah, but API design is pretty much all of programming. When you are calling your own API or your coworkers are or a third party is, software design is essentially API design.
There is a school of thought that says the first time you solve the problem it is just to understand the problem and you should throw away your solution. The second time you solve the problem, you'll have the right solution.
.Net has better support of generics than Java.
How so? I've found them to be about on par.
Of course, but Java and C# allow mediocre programmers to be employable and semi-productive
There are mediocre programmers everywhere. It has been my observation that most tend to stay away from Java and C# and prefer languages like Visual Basic, Perl, JavaScript.
They would have negative productivity with C/C++.
Unless you are doing embedded programming or have to interact with a legacy system, there is no point in doing C or C++. To some degree they were necessary to call unmanaged, OS-native libraries through JNI or "unsafe code" in .NET, but today that's largely unnecessary.
Kernighan and Ritchie C was terrible even in basic syntax. It took ANSI over a decade to finally get ANSI C into a decent language, whose main value was being a high-level version of assembly language, hence its usefulness when interacting directly with hardware. But unless you're doing that, there's no point in C anymore.
There's even less point in C++, as it's a mediocre object-oriented language and it's obsolete. Both Java and .NET are/have much better languages, and both are managed platforms, which is much more useful.
Finally, there is no reason to take pride in knowing C or C++ instead of their modern counterparts. It doesn't make you a better programmer to write lower-level code. Sure, you have to understand how sorting, graphics, databases, filesystems, etc. work, but there's no point in reinventing the wheel. Writing function pointers in C doesn't make you a guru. And this is coming from someone who not only mastered C and C++ 30+ years ago, but also had no problem writing in 80x86 assembly language.
.NET and Java aren't kiddy toys. They are the predominant platforms of the Internet, and there are damn good reasons why. I have yet to see anything other than low-level hardware code that can be implemented easily or elegantly in any other language that couldn't be implemented easily or elegantly in .NET or Java. (One caveat: String to enum conversion is better in Java.)
Of course, I'm always willing to look at an example.
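Not a counterexample, but for the record, the String-to-enum convenience mentioned above is Java's built-in Enum.valueOf/name() pair. A quick sketch (hypothetical DogAge enum):

```java
public class EnumDemo {
    enum DogAge { PUPPY, YOUNG_ADULT, OLD_ADULT }

    public static void main(String[] args) {
        // String -> enum in one call; throws IllegalArgumentException
        // if the name doesn't match any constant.
        DogAge a = DogAge.valueOf("PUPPY");
        System.out.println(a);           // PUPPY
        System.out.println(a.ordinal()); // 0

        // enum -> String is just name():
        System.out.println(DogAge.OLD_ADULT.name()); // OLD_ADULT
    }
}
```

In C# the equivalent goes through Enum.Parse with a typeof() argument and a cast, which is noticeably clumsier.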
Why would anyone want to code in this day of offshoring and H-1Bs? Sure, you need some to clean up the work after them -- but its heyday is gone.
And this is coming from someone who not only mastered C and C++ 30+ years ago
I'm afraid to ask, but how old are you?
As a toddler, I was not interested in academia ... just in playing with toys. Then, I'd started reading about history, archeology, etc. My true interest in the sciences, started after the 5th grade.
I'm in my 30s. I wrote my first program at the age of 7 about five minutes into touching a computer for the first time.
.Net has better support of generics than Java.
How so? I've found them to be about on par.
Their implementation is entirely different: C# does not erase the runtime generic types and allows for full runtime type introspection, while Java uses type erasure for compatibility reasons and thus type-casts at runtime. Syntax-wise, I find the Java wildcard style for expressing upper and lower bounds more elegant.
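The erasure difference is easy to demonstrate from the Java side; a minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class Erasure {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // After erasure, both are plain ArrayList at runtime:
        System.out.println(strings.getClass() == ints.getClass()); // true

        // ...which is why Java forbids runtime checks on the type argument:
        // if (strings instanceof List<String>) {}  // does not compile
        // In C#, typeof(List<string>) != typeof(List<int>) -- the type
        // arguments survive to runtime (reified generics).
    }
}
```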
.NET and Java aren't kiddy toys. They are the predominant platforms of the Internet, and there are damn good reasons why. I have yet to see anything other than low-level hardware code that can be implemented easily or elegantly in any other language that couldn't be implemented easily or elegantly in .NET or Java. (One caveat: String to enum conversion is better in Java.)
WRT stability this is certainly true, but you can achieve similar or better stability with less code, for example using Scala, where you can mix Java syntax and more advanced functional concepts to significantly shorten the code and, more importantly, make it entirely immutable/stateless, which reduces bugs/side effects. The drawback is that if you get too fancy, your readability may suffer with Scala, and if you have attrition and suddenly a significant bug shows up, then you will have a hard time finding developers that can pick up, maintain, and improve the legacy code.
I'm in my 30s. I wrote my first program at the age of 7 about five minutes into touching a computer for the first time.
I see, I guess I was more a history buff, as a kid, reading about the Revolutionary War, General MacArthur, Ancient Egypt, etc, before anyone else I knew was into those subjects.
It was watching Star Wars, on video, that piqued my interest in science. Then, I'd started reading about chemistry, physics, etc. I should have started with computers, since those were already invented. The lightsaber would have to wait for another century or two.
Agreed. Enums are more efficient at making code readable, i.e.:
enum SBH = {shiteater, shitwiper, eaterofshit, posterofshitpix, enthralledwithshit, lovesshityeteternallyconstipated};
where
enum DogAge { puppy, youngAdult, oldAdult }
enum HumanAge { baby, toddler, child, adolescent, youngAdult, middleAge, senior }
There's even less point in C++ as it's a mediocre object-oriented language and it's obsolete. Both Java and .NET are/have much better language and both are managed platforms, which is much more useful.
I am not a fan of C++, but it is a rather extensible language. It has TWO meta-programming facilities: macros and templates.
C++ relies heavily on the libraries for functionality. STL and Boost kept it relevant through time.
Type inference is just syntactical sugar. It doesn't change the rules of how things are called and what gets passed to methods.
True, but a rich type system is much more elaborate. It provides for complex type calculations at compile time. For example, it can enforce Unit of Measurement constraints.
Most type-inference systems allow you to declare the type explicitly. So the clarity is there if you choose. However, since the type of a variable is statically known, you can just leave it out. It would also be trivial for the IDE to annotate full type information.
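Java can fake units-of-measure checking with phantom type parameters, though nowhere near as cleanly as F#'s built-in support. A hypothetical sketch (uses records, so Java 16+):

```java
public class Units {
    // Phantom marker types: never instantiated, exist only for the compiler.
    interface Meters {}
    interface Seconds {}

    // A quantity tagged with its unit; the tag is erased at runtime
    // but enforced at compile time.
    record Quantity<U>(double value) {
        Quantity<U> plus(Quantity<U> other) { // same-unit addition only
            return new Quantity<>(value + other.value);
        }
    }

    public static void main(String[] args) {
        Quantity<Meters> d1 = new Quantity<>(3.0);
        Quantity<Meters> d2 = new Quantity<>(4.0);
        Quantity<Seconds> t = new Quantity<>(2.0);

        System.out.println(d1.plus(d2).value()); // 7.0
        // d1.plus(t);  // does not compile: Meters != Seconds
    }
}
```

F# goes further: it can also derive units through multiplication and division (meters/second and so on), which this phantom-type trick can't express without a lot of extra machinery.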
Scala is promising and popular for big data. Have any of you used the Netbeans IDE? It's pretty easy to use compared to Eclipse. As for coding, it's useful to know for building one's own software, not as a career.
.NET and Java aren't kiddy toys. They are the predominant platforms of the Internet, and there are damn good reasons why. I have yet to see anything other than low-level hardware code that can be implemented easily or elegantly in any other language that couldn't be implemented easily or elegantly in .NET or Java. (One caveat: String to enum conversion is better in Java.)
Agreed. Java is just marginally slower. One can easily throw money at machines and save on development/debugging costs.
C++ is too slow for many things. High-frequency trading is one example. You really should be doing FPGA or ASIC with a customized network stack.
Scala is promising and popular for big data. Have any of you used the Netbeans IDE? It's pretty easy to use compared to Eclipse. As for coding, it's useful to know for building one's own software, not as a career.
I would stick with IntelliJ.
Have you checked out Kotlin?
http://en.wikipedia.org/wiki/Kotlin_%28programming_language%29
It is not nearly as big as Scala.
Their implementation is entirely different, C# does not erase the runtime generic types and allows for full runtime type introspection while Java is using type erasure for compatibility reasons and thus type-casts at runtime.
Exactly my point. One of the damn few differences between C# and Java reflection is type erasure, and that's utterly insignificant. To call the implementations "totally different" because of type erasure is ridiculous.
I've been doing Java programming since 1995, including intensive reflection, and there has been only one time when I actually needed the type information included, and then it was trivially easy to add that type information by including an instance of java.lang.Class. Moreover, this solution was obvious. So I cannot consider the exclusion of type information in the Java class format to be a big deal or a significant difference in the reflection API. You have a simple opt-in mechanism for including type information.
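The opt-in mechanism described here is the usual Class<T> token pattern; a rough sketch with a hypothetical helper:

```java
public class TypeToken {
    // The type argument is erased at runtime, so we carry it explicitly
    // as a Class<T> token and use it for checked casts.
    static <T> T firstOfType(Class<T> type, Object... items) {
        for (Object item : items) {
            if (type.isInstance(item)) {
                return type.cast(item); // safe cast, checked via the token
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Object result = firstOfType(String.class, 1, 2.0, "three");
        System.out.println(result); // three
    }
}
```

The same trick underlies a lot of real Java APIs (e.g. Jackson's readValue(json, SomeType.class)): wherever erasure would lose the type, you pass the Class object alongside.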
scala
My view on Scala is that it is simply trying to be the next major revision of Java. Personally, I’d prefer if the good parts of Scala were simply rolled into Java 9 or 10. Yeah it would break backwards compatibility, but far less so than having a different platform altogether. That would be far more disruptive than simply stating that Java 10 and up apps have to run on JRE 10 and up. I see no point in having to switch an entire platform rather than rolling a few new features into an established platform even if it means a compatibility break.
Knock knock!
Who's there?
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
Java
1995 called and wants its joke back. It's been 15 years since Java performance has sucked. Maybe you should be examining your code rather than what the compiler produces.
Any reasonable language should not have the concept of null. At least it must have null-safety at compile time.
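Java's partial answer, years later, was Optional in Java 8 -- not compile-time null-safety like Kotlin's, but it does move absence into the type. A minimal sketch with a hypothetical lookup helper:

```java
import java.util.Map;
import java.util.Optional;

public class NullSafety {
    static Optional<String> lookup(Map<String, String> db, String key) {
        // Absence is represented in the return type, not by a hidden null.
        return Optional.ofNullable(db.get(key));
    }

    public static void main(String[] args) {
        Map<String, String> db = Map.of("en", "hello");

        // The caller is forced to handle the missing case explicitly:
        System.out.println(lookup(db, "en").orElse("<none>")); // hello
        System.out.println(lookup(db, "fr").orElse("<none>")); // <none>
    }
}
```

It's a convention rather than a guarantee (you can still pass null around), which is why languages with nullable types baked into the checker, like Kotlin, go further.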
In programming, every small thing is trivial, but the code will look ugly.
http://singularityhub.com/2014/12/28/future-of-work-part-ii-why-teaching-everyone-to-code-is-delusional/