Dvorak recently blogged about a particularly nasty outage where various systems went offline due to a problem with MSFT's WGA.
Dvorak's argument seems to be something along the lines of "how can you trust someone else's servers?" That strikes me as ridiculous, because everyone who has a phone or an Internet connection already trusts someone else's servers.
I've been around this block a few times, and everything old is new again. Today's technology trends are just a reaction to yesterday's pain. The two trends he is really talking about are centralized versus distributed computing and licensing versus subscribing to software.
With regard to centralized versus distributed computing, it comes down to this: do you wish to incur the extra scalability costs that are part of the TCO for any particular application all at once or over time? If you choose all at once, then pick a centralized system. If you choose over time, then pick a distributed system.
With regard to licensing versus subscribing, it comes down to this: do you wish to incur the extra IT costs that are part of the TCO for any particular application all at once or over time? If you choose all at once, then license the software. If you choose over time, then subscribe to the software.
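To make that concrete, here is a minimal back-of-the-envelope sketch, in Python, comparing the two payment shapes over a five-year horizon. Every number in it is a made-up assumption for illustration, not pricing from any real vendor, and the same arithmetic applies to the centralized versus distributed choice if you swap in hardware and operations costs instead.

    # Illustrative TCO comparison: pay all at once (license) vs. pay over time (subscribe).
    # All figures are invented for the sake of the example.
    license_price = 50_000          # one-time, up-front cost
    annual_maintenance = 10_000     # recurring cost that comes with the license
    subscription_per_year = 24_000  # recurring cost of subscribing instead

    for years in range(1, 6):
        license_tco = license_price + annual_maintenance * years
        subscribe_tco = subscription_per_year * years
        cheaper = "license" if license_tco < subscribe_tco else "subscribe"
        print(f"Year {years}: license={license_tco}, subscribe={subscribe_tco} -> {cheaper}")

The interesting part of the output is the crossover year: the longer you expect to keep the application, the more the up-front option amortizes in your favor, and the shorter the horizon, the more attractive paying over time looks.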
Not that trust isn't an issue. Before subscribing to a service, you should ask yourself whether or not you trust that company to deliver on its promises over time. You should ask that very same question whenever you purchase any product or service from a company, however.
No decision is perfect. There will be advantages and disadvantages whichever way you go. Just do your best to figure out which way is best for you. After that, make your decision and stick with it. Don't look back.
Tuesday, August 28, 2007
Friday, August 10, 2007
Putting the Money Where the Mouth Is
About a year ago, I was talking with the founders of an under-the-radar startup whose name and identifying details need not be mentioned here. At that time the technical founder was quite jazzed about Ruby on Rails, a web application framework built on the Ruby language. I checked back in with them recently to discover that they had changed their tune and were now looking for both Ruby and Java developers. As business application software development is my field, I was quite curious to discover what had happened in that year's time. The technical founder explained that they needed Java for some interoperability reasons that he did not go into detail about. The most significant reason, however, was that they were having a lot of trouble finding Ruby developers for hire.
I should mention that they are located in the Bay Area of California, which is, IMHO, ground zero for cutting-edge, innovative software development. I was shocked to hear that they were having trouble finding Ruby developers there, since I hear a lot of enthusiasm for Ruby, especially coming from the Bay Area. What happened?
The other day, I was having lunch with a peer who works as a software architect at a different ISV (not a competitor). He is a very vocal and strong proponent of Ruby. I told him the story and asked him if he would take a Ruby job. He thought about it for a moment and then declined. It turns out that when career professional software developers choose to learn about and endorse a new development platform, they make that choice based on technical merit. When those same developers make a career choice, however, technology doesn't figure very prominently in the decision-making process.
There is some irony here, as a common complaint among software developers is that management doesn't make good decisions because it doesn't sufficiently evaluate the technical merit or impact of those decisions.
I find it strangely curious that career professionals would spend time learning and endorsing a technology that they have no intention of pursuing professionally. It takes both time and mind, two limited resources, to learn a new application stack. When a software developer chooses to learn a new application stack, he or she is making an investment. Doesn't it make sense to expect a return on your investment? Of course, learning itself is also a valuable thing, so you do get some return on your investment. Wouldn't it be better if you maximized your return on investment by learning technology that was both cool and marketable?
I, personally, am not saying that Ruby is unmarketable. I, personally, don't need a deep-pocketed, high-profile software infrastructure company to spend bazillions of marketing dollars before I judge a technology as marketable (although it doesn't hurt). What I am saying is that, apparently, there is a non-trivial number of developers who endorse a technology even though they believe it lacks sufficient market share to be worth adding to their resumes.
Why would adding it be risky? The more irrelevant technology is on your resume, the less marketable you are. Time and mind aren't the only limited resources for a professional software developer. Resume column inches are another.
Maybe they endorse Ruby hoping that enough people will endorse it that it becomes marketable, and the watershed moment just hasn't happened yet. Maybe that early enthusiasm was based more on potential and promise than on actual delivery. There have been some performance issues with RoR, enough to prompt MSFT to weigh in with their own version.
A marketplace is a place where buyers and sellers come together to exchange different forms of capital. If something isn't perceived as worthy of buying or selling, then it doesn't have a place in the marketplace. The more worth that buyers and sellers perceive, and the more buyers and sellers who perceive that worth, the bigger the place it takes up in the marketplace.
Software developers, if you like Ruby enough to want it to be marketable, then you are going to have to be willing to do Ruby work for pay and to put it on your resume. You may have to take on the extra risk of being an early adopter, because followers aren't going anywhere without leaders.
I'd like to get more feedback from other developers. What's your take on this? Is Ruby marketable today? Would you take a Ruby job now? Why or why not?
Monday, August 6, 2007
Renaissance Software Development
I recently read a blog post called A Guide to Hiring Programmers: The High Cost of Low Quality, in which Frank Wiles makes the case that, when staffing a software development project, you should hire good coders even if it means sacrificing everything else. Serendipitously enough, I also read a book called The Business of Software by Erik Sink, in which the choice, at least for small ISVs, is to hire developers instead of programmers. Developers code, but they can also do other things such as design, model, and interview customers in order to capture specifications or defects. I am interested in these different styles or philosophies of software development because of my long history of experience in the field.
Frank advises not even bothering with the requirement that candidates be familiar with the target programming language or application platform stack. He also advises letting them telecommute 100% of the time if they want to. His reason for all of this is the claim that a good programmer will be ten times more productive than an average coder. My guess is that the big boost in productivity will only come once they have learned the environment, etc.
Erik makes the case that a good, well-rounded developer will advance the project faster than a hotshot coding specialist. If you had to choose between an average developer and an excellent programmer, you should pick the developer.
It's really a choice between generalists and specialists, which, really, is no choice at all. A project full of specialists may produce something brilliant, but it is unlikely to advance the stated goals of the organization. A project full of generalists might produce something completely on target but with a very lackluster execution. I say, let's embrace the genius of the "and" over the tyranny of the "or" and staff the project with both. Use the generalists to keep the project nimble and on target. Use the specialists to quickly solve any advanced technical problems that arise. You wouldn't play chess with all bishops or send nothing but running backs onto the football field. Why would you do the same thing in software development?