It has been a month since I wrote about my favorite open source projects. Some of you have been kind enough to point out some cool projects that I missed. Now would be a good time to introduce these worthy projects.
In the original article, I talked about Java and some very nice libraries that extend its functionality and power. One project that I missed was Google Collections, which extends the Java Collection Framework. Though still in alpha, this project is one to watch.
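To make the appeal concrete, here is the kind of boilerplate the library is designed to eliminate. The sketch below uses only plain JDK classes, and the class and method names are my own hypothetical ones; Google Collections' Multimap collapses this whole helper into a single put(key, value) call.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultimapByHand {
    // With the JDK alone, mapping one key to many values takes this dance;
    // a Multimap from Google Collections does it in one call.
    static void put(Map<String, List<String>> map, String key, String value) {
        List<String> values = map.get(key);
        if (values == null) {
            values = new ArrayList<String>();
            map.put(key, values);
        }
        values.add(value);
    }

    public static void main(String[] args) {
        Map<String, List<String>> langsByPlatform = new HashMap<String, List<String>>();
        put(langsByPlatform, "jvm", "Java");
        put(langsByPlatform, "jvm", "Scala");
        System.out.println(langsByPlatform.get("jvm")); // prints [Java, Scala]
    }
}
```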
I talked about the popularity of IoC/DI, a lightweight way of building applications from components without introducing a lot of static dependencies, thus allowing complex systems to be resilient to change over time. I listed Spring as a technology that empowers this lightweight approach. Another is Google Guice, which also allows you to easily build complex object hierarchies from a configuration. The difference between Guice and Spring is the format of that configuration. For Spring, it is an external XML file. For Guice, it takes the form of annotations embedded within the code itself, which could be spread across potentially thousands of different files. There are pros and cons to both, so which approach appeals to you is a matter of personal choice.
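The appeal of IoC/DI can be sketched without either framework. In the hand-rolled example below (all class names are hypothetical), a component depends only on an interface, and the wiring happens in one place; Spring would move that wiring into its XML file, while Guice would derive it from a Module plus @Inject annotations on the constructor.

```java
// A component depends on an abstraction, not a concrete class.
interface MessageSource {
    String getMessage();
}

// One concrete implementation; swapping it requires no change to Greeter.
class FileMessageSource implements MessageSource {
    public String getMessage() { return "hello from a file"; }
}

// The dependency arrives through the constructor.
class Greeter {
    private final MessageSource source;
    Greeter(MessageSource source) { this.source = source; }
    String greet() { return "Greeter says: " + source.getMessage(); }
}

public class WiringDemo {
    // In Spring, this wiring would live in the external XML file;
    // in Guice, a Module would bind MessageSource to FileMessageSource.
    public static Greeter wire() {
        return new Greeter(new FileMessageSource());
    }

    public static void main(String[] args) {
        System.out.println(wire().greet()); // prints Greeter says: hello from a file
    }
}
```

Because Greeter never names a concrete MessageSource, the static dependency the article warns about never forms; that is the resilience to change that both frameworks buy you.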
I talked about JUnit, which is a unit testing framework. One advantage of IoC/DI is that you can easily plug in a data access tier that uses a relational database for the real application, and a mock object tier that returns canned responses for the purposes of unit testing. EasyMock is a way to dynamically generate these mock objects.
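Here is a hand-written sketch of the idea (the interface and class names are hypothetical): the class under test receives a fake data access tier that returns a canned response instead of hitting a database. EasyMock's value is that it generates such stand-ins dynamically at runtime, so you don't have to write one for every interface.

```java
// The data access tier is an interface, so it can be swapped out.
interface CustomerDao {
    String findNameById(int id);
}

// Business logic under test; it never knows whether the DAO is real or fake.
class CustomerService {
    private final CustomerDao dao;
    CustomerService(CustomerDao dao) { this.dao = dao; }
    String greeting(int id) { return "Welcome back, " + dao.findNameById(id); }
}

public class MockDemo {
    public static void main(String[] args) {
        // A hand-rolled mock with a canned response; EasyMock would
        // conjure the equivalent from the interface at runtime.
        CustomerDao mockDao = id -> "Ada";
        CustomerService service = new CustomerService(mockDao);
        System.out.println(service.greeting(42)); // prints Welcome back, Ada
    }
}
```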
I can't believe that I completely forgot to mention workflow in the original article (it was in the mind map). This term has been overused, so allow me to distinguish two types of workflow: internal and external. External workflow is compelling because it allows for a standardized way to automate the interaction and integration of different systems to provide a complete solution. Internal workflow is an easy way of making your application more flexible and configurable by letting you bolt on a workflow engine that empowers easier customization of business processes. There are two parts to workflow: the engine and the editor. The editor is used by workflow architects (if internal) to customize business processes. It is also used by integration specialists (if external) to specify web services that are aggregates of other web services, keeping satellite systems in sync. For internal workflow, the engine is embedded within the application. For external workflow, the engine is a web service that is considered separate from, but dependent on, the other systems. There are a few really good proprietary workflow engines and editors out there, but my favorite OSS workflow is YAWL.
I talked about web browser scripting and listed my favorite libraries to help accelerate development. I mentioned how hard JavaScript is to debug but completely forgot to mention my favorite JavaScript debuggers. For Firefox, I use the Venkman debugger. Though not OSS, I should mention that VS.NET has an adequate debugger for JavaScript on Internet Explorer.
Thursday, November 29, 2007
Sunday, October 28, 2007
The Rise of Open Source Software
The Open Source Software movement has been gaining so much momentum within the Information Technology community recently that it is almost impossible to ignore. Technology players like IBM and Google have contributed to open source. Even Microsoft, whose CEO displays open hostility towards OSS, has contributed. Now media companies such as Yahoo and the New York Times have contributed to OSS.
CTOs and other key decision players in any company's IT department no longer consider OSS as an "if" but rather as a "when." It's not so much a decision as to whether or not to embrace OSS but rather to decide what their OSS strategy is and the timeline for executing on that strategy. It is also very likely that OSS is already being used within their organization.
Why is it that OSS is so compelling? It has a large community supporting it and the price/performance ratio can't be beat. I have blogged on this subject in more detail elsewhere.
Tuesday, October 16, 2007
Children of Men
I saw an interesting video the other day. The story line for Children of Men is a fairly straight-up sample from the genre of post-apocalyptic science fiction, and that alone makes it worth watching if you are into that sort of thing. That is not why I am writing about it here, however. What makes this movie interesting enough to comment on in a blog about contemporary culture is its innovative cinematic approach.
Industries are constantly looking for ways to converge. They do this to open up new markets and to form partnerships that enhance their value proposition without expanding payroll. One of the current hot areas for convergence is between the gaming and the movie industries. I have blogged about all of this before.
What makes the movie Children of Men interesting is its approach towards convergence with the gaming industry. The cinematography of the movie is very reminiscent of the gaming genre known as the First Person Shooter. There are dialog shots in the movie that are filmed much like the cut scenes in a game. In the action shots, they used a Steadicam behind the main actors as they progress through the scene. The effect is very much like being in a team-based FPS where you are running behind the next gamer in front of you. You and the actors move through a battle zone trying to avoid being shot at, which is a very typical challenge in an FPS-style game. In another scene, one of the supporting cast members gets pulled off of a bus and you see her grim fate as the bus pulls away. This is a typical atmosphere-building sequence that you might find in well-made games.
I keep my eyes open for opportunities of convergence because I find it to be fascinating. I have previously noted other ways that the film and gaming industries are converging. What other ways do you see?
Friday, October 5, 2007
Interactive Fiction
Interactive Fiction is a literary genre where the reader must participate actively. It is a cross between a book and a game. Everything is in the second person, and you have to write it with minimal expectations about what has happened before or after any point in the story. This is not easy. I have blogged on this subject twice before.
I was so intrigued by this genre that I tried my hand at it by writing a piece and entering it into a competition. I invite you to read my submission. My piece is called Reconciling Mother.
Tuesday, August 28, 2007
The Pendulum Swings
Dvorak recently blogged about a particularly nasty outage where various systems went offline due to a problem with Microsoft's Windows Genuine Advantage (WGA).
Dvorak's argument seems to be something along the lines of "how can you trust someone else's servers?" This is ridiculous because everyone who has a phone or a connection to the Internet trusts someone else's servers.
I've been around this block a few times and everything old is new again. Today's technology trends are just a reaction to yesterday's pain. Centralized versus distributed computing and licensing versus subscribing to software are the two trends that he is talking about.
With regards to centralized versus distributed computing, it comes down to this: do you wish to incur the extra scalability costs, which are a part of the TCO for any particular application, all at once or over time? If you choose all at once, then pick a centralized system. If you choose over time, then pick a distributed system.
With regards to licensing versus subscribing, it comes down to this: do you wish to incur the extra IT costs, which are a part of the TCO for any particular application, all at once or over time? If you choose all at once, then license the software. If you choose over time, then subscribe to the software.
Not that trust isn't an issue. Before subscribing to a service, you should ask yourself whether or not you trust that company to deliver on its promises over time. You should ask that very same question whenever you purchase any product or service from a company, however.
No decision is perfect. There will be advantages and disadvantages any way you go. Just do your best trying to figure out which way is best for you. After that, make your decision and stick with it. Don't look back.
Friday, August 10, 2007
Putting the Money Where the Mouth Is
About a year ago, I was talking with the founders of an under-the-radar startup whose name and identifying details need not be mentioned here. At that time, the technical founder was quite jazzed about Ruby on Rails, a web application framework built on the Ruby language. I checked back in with them recently to discover that they had changed their tune and were now looking for both Ruby and Java developers. As business application software development is my field, I was quite curious to discover what had happened in that year's time. The technical founder explained that they needed Java for some interoperability reasons that he did not go into in detail. The most significant reason, however, was that they were having a lot of trouble finding Ruby developers for hire.
I should mention that they are located in the Bay Area of California, which is, IMHO, ground zero for cutting-edge, innovative software development. I was shocked to hear that they were having trouble finding Ruby developers there since I hear a lot of enthusiasm for Ruby, especially coming from the Bay Area. What happened?
The other day, I was having lunch with a peer who works as a software architect at a different ISV (not a competitor). He is a very vocal and strong proponent of Ruby. I told him the story and asked him if he would take a Ruby job. He thought about it for a moment and then declined. It turns out that when career professional software developers choose to learn about and endorse a new development platform, they make that choice based on technical merit. When those same developers make a career choice, however, technology doesn't figure very prominently in the decision-making process.
There is some irony here as a common complaint amongst software developers is that management doesn't make good decisions because they don't sufficiently evaluate the technical merit or impact of their decisions.
I find it curious that career professionals would spend time learning and endorsing a technology that they had no intention of pursuing professionally. It takes both time and mind share, limited resources, to learn a new application stack. When a software developer chooses to learn a new application stack, he or she is making an investment. Doesn't it make sense to expect a return on your investment? Of course, learning itself is also a valuable thing, so you do get some return on your investment. Wouldn't it be better if you maximized your return on investment by learning technology that was both cool and marketable?
I, personally, am not saying that Ruby is unmarketable. I don't need a deep-pocketed, high-profile software infrastructure company to spend bazillions of marketing dollars before I judge a technology as marketable (although it doesn't hurt). What I am saying is that, apparently, there is a non-trivial number of developers who endorse a technology that they believe lacks sufficient market share for them to add it to their resumes.
Why would that be risky? The more irrelevant technology is on your resume, the less marketable you are. Time and mind aren't the only limited resources for a professional software developer. Resume column inches is another.
Maybe they endorse Ruby hoping that enough people will endorse it that it becomes marketable, and the watershed moment just hasn't happened yet. Maybe that early enthusiasm was based more on potential and promise than on actual delivery. There have been some performance issues with RoR, enough to prompt Microsoft to weigh in with its own version.
A marketplace is a place where buyers and sellers come together to exchange different forms of capital. If something isn't perceived as worthy of buying or selling, then it doesn't have a place in the marketplace. The more perceived worth by buyers and sellers, the more buyers and sellers who perceive worth, the bigger the place it takes up in the marketplace.
Software developers, if you like Ruby enough to want it to be marketable, then you are going to have to be willing to do Ruby work for pay and to put it on your resume. You may have to take on the extra risk of being an early adopter because followers aren't going anywhere without leaders.
I'd like to get more feedback from other developers. What's your take on this? Is Ruby marketable today? Would you take a Ruby job now? Why or why not?
Monday, August 6, 2007
Renaissance Software Development
I recently read a blog post called A Guide to Hiring Programmers: The High Cost of Low Quality in which Frank Wiles makes the case, when staffing a software development project, for hiring good coders even if it means sacrificing everything else. Serendipitously enough, I also read a book called The Business of Software by Eric Sink in which the choice, at least for small ISVs, is to hire developers instead of programmers. Developers code, but they can also do other things such as design, model, and interview customers in order to capture specifications or defects. I am interested in these different styles or philosophies in software development because of my long history of experience in the field.
Frank advises not to even bother with the requirement that they are familiar with the target programming language or application platform stack. He also advises to let them telecommute 100% of the time if they want to. His reason for all of this is the claim that a good quality programmer will be 10 times more productive than an average coder. My guess is that the big boost in productivity will come once they learn the environment, etc.
Eric makes the case that a good, well-rounded developer will advance the project faster than a hotshot code specialist. If you had to choose between an average developer and an excellent programmer, you should pick the developer.
It's really a choice between generalists and specialists, which, really, is no choice at all. A project full of specialists may produce something brilliant, but it is unlikely to advance the stated goals of the organization. A project full of generalists might produce something completely on target but with a very lackluster execution. I say, let's embrace the genius of the "and" over the tyranny of the "or" and staff the project with both. Use the generalists to keep the project nimble and on target. Use the specialists to quickly solve any advanced technical problems that arise. You wouldn't play chess with all bishops or send nothing but running backs onto the football field. Why would you do the same thing in software development?
Monday, July 30, 2007
There is Neither Beauty Nor Mystery in Ignorance
I subscribe to the American Go E-Journal, which periodically sends me email about this ancient, venerated strategic board game. One of their recent emails included an article called SAVING GO: A Modest Proposal to Programmers (Stop!) in which Paul Celmer pleads with software developers to stop trying to create a professional-strength Go-playing program. His main arguments are that such a program would "devalue human achievement" and that it would "mar the beauty of our game" by solving its mystery. He claims that this is what has happened to both checkers and chess.
Obviously, Mr. Celmer isn't worried about devaluing human programmer achievement, only human go player achievement. A professional-strength go-playing program would be a great human achievement in the fields of software engineering and artificial intelligence. Perhaps we should outlaw cars since they might devalue the achievement of human marathon runners.
I don't believe that professional-strength chess programs have solved that game, not by a long shot. It does look like the best chess players are having more and more difficulty keeping up with the best software systems, but those systems are out of reach of the standard consumer. Most professional chess players can hammer the typical consumer-grade chess game.
At the same time, software has raised the average level of chess competency by allowing students of the game to drill against a computer that is available to them at any time. Before that, they had to sharpen their skills against other humans, which took longer because both humans had to schedule time to be available to play.
Mr. Celmer really doesn't have a lot to worry about, however. Last year, the artificial intelligence journal IEEE Intelligent Systems published an editorial called Computers Play Chess; Humans Play Go. Editor James Hendler describes the combinatorics of go as dwarfing that of chess. Also, there is a lot more simultaneous, multivariate pattern recognition in go than there is in chess. Even though the rules of go are simpler than those of chess, the playing of go is much more difficult.
It even takes a while to teach a human the concepts of life and death in go. No one has successfully programmed a computer to always recognize even that simple, fundamental concept, much less more advanced concepts such as thickness, shape, or the direction of development.
There is go-playing software that is useful for drilling and for improving your game. My favorite is an open source program called GNU Go. This software is actually only the artificial-intelligence player. You still need software that simulates a go board and the playing stones and that can talk to GNU Go. My favorite open source software for that is Panda-glGo. Even the best go software cannot hope to beat the worst professional player.
I have one last nit to pick about Mr. Celmer's plea. In Mr. Celmer's world, beauty can be marred by solving mystery. In my world that can never be. You see, solving one mystery opens our awareness to a thousand more mysteries, thereby increasing its beauty by three orders of magnitude.
Labels:
chess,
combinatorics,
go,
IEEE,
pattern recognition
Tuesday, July 24, 2007
Product Placement Revisited
On the drive into work, I was listening to NPR. One of the featured stories was about a practice in oil production called gas flaring and how it disrupts life in Nigeria. The story started with instructions on using the Google Earth web site to see the gas flares.
This is a modern, information superhighway example of product placement in the news. Google's geographic imaging search web site is prominently featured in a story that, in my humble opinion, is largely irrelevant to the search giant and vice versa.
What is the reason behind the product placement? I could not find any reference to Google or its C-level executives being sponsors in NPR's latest annual report. I did run across a page reporting that Google allows NPR to cloak, which is considered inappropriate in the world of SEO. Not exactly a dark conspiracy or scathing exposé, eh?
Perhaps NPR's reporters are just very enthusiastic about the search giant's offerings. Whether or not there was any cash remuneration, it is still product placement.
What is your opinion? Is product placement wrong or OK?
Thursday, July 19, 2007
The Blurring of Advert and Content
I recently ran across an article on how to write better headlines. Great, I thought, I'll improve my blog by writing better blog titles. So, I read the article.
Basically, the gist of the article is this: make a big, explicit promise in your headline to attract more readers. There are variations on this theme, such as phrasing the headline as a how-to for getting something desirable or avoiding something undesirable. Other suggestions include using popular keywords and numbered lists, all in the service of attracting readers by promising something for nothing.
I recognized this advice as techniques used in effective advertising. I realized that, to the writer of this article that I read, all professional blogs are advertisements cleverly disguised as content.
I remember seeing this billboard back in the 80's. It was mostly blank except for the following words: "Advertising, because without it, you wouldn't know." The trend to blur the distinction between content and advertising has continued to escalate ever since.
Why? Well, back in the 80's people generally tended to view content as desirable and advertising as a necessary evil for getting content for free or almost free. So, you get to watch your favorite TV program but at the cost of having to watch the commercials, or you get to read news over the web but at the cost of having a banner ad displayed along with it. I believe that when content and advertising are framed separately, people get annoyed at the advertising and learn techniques for avoiding it. TiVo or popup blocker, anyone?
The advertisers don't want to be avoided. They paid good money for mind share. So the media channels, wanting more advertising money, work out ways to blur the distinction between the desired content and the annoying advert. Ergo, product placement or, apparently, blogs (gulp).
Some blogs. Not all blogs. I still believe in the blog as opinion piece rather than puff piece. Opinion pieces are what you will see here. Pieces that share my observations, hopefully in a way that invites you to think about what's important to you. That is why I have a blog titled What Does it Mean to be of Service instead of How to Beat the Repairman at His Own Scam.
Of course, if the advert was desirable, then the blurring of advert with content would not be so annoying. Perhaps that is the promise of targeted advertising. The cost might be the loss of privacy as your consumer patterns would get shared like your email address does amongst the spammers.
What do you believe? Should advert and content be separate? Is convergence with targeted advertising OK? Should your consumer patterns be considered private knowledge or public domain?
Labels: advertising, blogging, commercials, consumer, content, privacy, TV
Thursday, July 12, 2007
In Defence of Design Patterns
The GoF Design Patterns book has come under some criticism in recent blogs. As a senior manager and developer of business applications, I feel the need to weigh in on this subject.
But first, let me discuss some history to set the proper context. Writing software for complex systems is very hard and error prone, always has been, always will be. Twenty years ago, a group of computer scientists asked themselves why this was so and came up with a new way of designing software with a focus on managing complexity. This was called Object Oriented Programming.
The problem was that, for all of its promises and good intentions, OOP wasn't really decreasing the probability of project failure all that much. It turns out that there are plenty of ways to design a fully OOP-compliant system that is just as bad as any monolithic system.
Ten years ago, the GoF book was first published. I believe that it was published to show developers how to create OOP designs that were useful. I learned a lot from it and recommend it to any developer who has even the slightest interest in developing OOP systems. There aren't too many other ten-year-old software development books out there that I would give such a blanket recommendation, by the way.
The two biggest criticisms that I have encountered about this book can be summed up as follows: it doesn't really address the needs of unit testing, and it encourages a mindless copy-and-paste mentality in designing systems.
With regards to unit testing, I believe that the mediator pattern would best be applied to solving this problem. That pattern is one of the original patterns from the book. If you read that pattern, you won't find any mention of solving unit testing problems. That is because unit testing didn't really get a big emphasis until around the time of eXtreme Programming, which came after design patterns. Software engineering is still very much in its infancy and we still have a lot to learn about writing complex systems with minimal risk. Modern IoC systems, like Spring, make it easy to write unit tests for n-tier systems because they promote loose coupling, which makes it just as easy for a middle-tier application to be called from a web server as from a JUnit test program. Loose coupling is what the mediator pattern is all about.
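To make the loose-coupling point concrete, here is a minimal sketch in Java (the class and method names are my own invention, not taken from the GoF book or from Spring) of a middle tier that can be wired to either a real data-access tier or a mock one that returns canned responses:

```java
// A data-access interface decouples the middle tier from any concrete database.
interface CustomerDao {
    String findCustomerName(int id);
}

// The production implementation would use real JDBC calls (stubbed out here).
class JdbcCustomerDao implements CustomerDao {
    public String findCustomerName(int id) {
        throw new UnsupportedOperationException("requires a live database");
    }
}

// A mock implementation returns canned responses for unit testing.
class MockCustomerDao implements CustomerDao {
    public String findCustomerName(int id) {
        return "Mock Customer #" + id;
    }
}

// The middle tier depends only on the interface, so an IoC container
// (or a JUnit test) can inject whichever implementation is appropriate.
class GreetingService {
    private final CustomerDao dao;

    GreetingService(CustomerDao dao) {
        this.dao = dao;
    }

    String greet(int id) {
        return "Hello, " + dao.findCustomerName(id);
    }
}
```

A unit test would simply construct `new GreetingService(new MockCustomerDao())` and assert on the result, with no database anywhere in sight; the production wiring swaps in the JDBC implementation without touching the service.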
With regards to a copy-and-paste mentality, I do not believe that the GoF book promotes that. The authors explain the concepts and the drivers behind each pattern using traditional OOP notation which, at that time, meant Booch 93 (since superseded by UML). There are code samples, but I do not believe the intent was for readers to copy and paste them into their solutions. Rather, I believe the code samples were included for illustrative purposes only. The book gives advice and examples on doing good OOP design, not cookie-cutter recipes.
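In that same illustrative spirit, here is a hedged sketch of the mediator pattern in Java (the chat-room scenario and all of the names are my own, not the book's sample code). The point of the pattern, not the particulars of the code, is what matters: colleagues never reference each other directly, and all communication flows through the mediator.

```java
import java.util.ArrayList;
import java.util.List;

// The mediator centralizes communication so colleagues stay loosely coupled.
interface ChatMediator {
    void register(Participant p);
    void broadcast(String from, String message);
}

class ChatRoom implements ChatMediator {
    private final List<Participant> participants = new ArrayList<>();

    public void register(Participant p) {
        participants.add(p);
    }

    public void broadcast(String from, String message) {
        // Deliver to everyone except the sender; only the mediator
        // knows who the colleagues are.
        for (Participant p : participants) {
            if (!p.name.equals(from)) {
                p.receive(from + ": " + message);
            }
        }
    }
}

// A colleague knows only the mediator, never the other colleagues.
class Participant {
    final String name;
    final List<String> inbox = new ArrayList<>();
    private final ChatMediator mediator;

    Participant(String name, ChatMediator mediator) {
        this.name = name;
        this.mediator = mediator;
        mediator.register(this);
    }

    void send(String message) {
        mediator.broadcast(name, message);
    }

    void receive(String message) {
        inbox.add(message);
    }
}
```

Adding a new participant requires no change to any existing colleague, which is exactly the kind of design insight the book's samples are there to convey.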
What do you think? Are design patterns an effective way of learning good OOP design or are they an irrelevant crutch?
Wednesday, July 11, 2007
Authority or Community?
In a recent exchange at an online forum, I was criticized about this blog because I would comment on other blog entries and hyperlink to the relevant entry. The other person felt that "Improving incrementally on someone else's idea garners you less attention and respect than breaking new ground."
I'm sure that is very important if your goal is to garner more respect and attention than anyone else. I would prefer to view the Internet as a breeding ground for an open and free exchange of ideas where the goal is collaboration and mutual respect rather than a battlefield where you are supposed to beat everyone else in the game of gaining mind share. I suspect that the original inventor of the World Wide Web had something similar in mind when he proposed hyperlinking as the fundamental feature of this new technology.
I don't believe that competition is always bad. If you are trying to build a web publishing business and you want to attract venture capital, then you are going to have to focus on beating your competition to become the most attractive business. That means garnering more mind share than your competitors. That means improving SEO so that searches list your site above those of your competitors. According to Google's own documentation, that means you don't link to your competitors.
Too much of anything isn't good, however. I believe that there is room in this world for cooperation too. It is dysfunctional if your competition button is always stuck in the on position and your cooperation button rusts away from disuse.
Are you somehow inferior for discussing with your friends what happened on a TV show last night? Shouldn't you be inventing your own shows instead of commenting on others? There is nothing wrong with inventing your own shows, of course; however, I believe that there is also nothing wrong with sharing your views on news, events, culture, commentary, society, and sports with others in a dialog of free speech. How else can we discover our differences and find commonality? How else can we learn to relate to each other?
That is why you will always find links in my blog postings to other sources. I will not simply rehash what was said elsewhere but use that originating post as a starting point to tell my own story.
What do you believe? Is authority more important than community or vice versa?
Monday, July 9, 2007
Making Fun of MMORPG
Two of my favorite interests are media and technology. This blog post discusses two very funny convergences of these two topics that I ran across recently.
The technology being parodied is the MMORPG, or Massively Multiplayer Online Role Playing Game. The media doing the parody are The Simpsons and an entry on YouTube.
If you do not already know about The Simpsons and you are reading this blog entry, then you are either not a member of the human race or today is the first day out of your twenty-year coma. I won't even try to describe this TV cartoon show to you. The episode to watch is episode 17 of the 18th season, entitled Marge Gamer. After being publicly embarrassed by not having an email address, Marge decides to explore this Internet thing and ends up subscribing to a parody of EverQuest. Once again, Groening does an excellent job being on target with the subject of his good-natured ridicule.
It is possible to be alive in the age of the Internet and not know about YouTube, but don't admit that or you might be branded a Luddite. YouTube is a social networking site whose medium is video. Some former PayPal employees started this puppy and were able to sell it to Google about a year and a half later. If you don't know about YouTube, then you don't have young children or you don't talk to them. YouTube is very popular amongst the younger demographic.
The video that I ran across on YouTube is a great parody of Second Life, but only if you have ever been on Second Life. Otherwise, you won't understand it at all. It could be argued that Second Life is not an MMORPG. It is massive. It is multiplayer. It is online. The only role that you are playing is whatever you decide the persona of your avatar is. Is it a game? Well, it is a game in the infinite sense but not in the finite sense. It could also be branded as a social networking site with a 3D virtual world interface.
What are Finite and Infinite Games? To answer that, you should read James Carse's book of the same name. I have also blogged about this elsewhere.
Sunday, July 8, 2007
New Coolness for the Casual Gamer
I have recently run across an interesting and innovative trend in gaming, the live Linux games DVD. This is especially interesting to me because I am only a very casual gamer. I just want to spend thirty minutes gaming as a release after a hard day. I'll play one game for a while, get bored, then drop it for another. I don't want to spend a lot of time, energy, and money building up a game persona or leveling up an RPG character the way you would build a career. This is where the live Linux game DVD comes in.
Linux is an OS that you don't have to install on your hard drive. There are many so-called live DVD distributions of Linux: you just boot off of the DVD and you are running Linux. You will most probably have to make a BIOS change in order to boot off of the DVD player. There are plenty of live DVD Linux distributions out there. My favorites in this category are Knoppix and Ubuntu. The live DVD version of Ubuntu is a little hard to find. I recommend using a BitTorrent client to download it. Here is where you can find the relevant tracker.
The open source roots of Linux make it very customizable, and Linux installers are also very customizable. There are different Linux distros (short for distributions) for just about any need under the sun. Casual gaming has been recognized as one of those needs. A couple of years back, I ran across a variant of Knoppix devoted to gaming. It did have some cool games on it, but most of the games were not very good. It felt like the makers of this distro didn't have enough commitment to make this bootable DVD really exciting.
Recently, I ran across a great bootable Linux DVD for games called Live Linux Gamers. This is the one to watch. There are games for every genre lover. Warsow and Tremulous are both great networked, team-based first person shooters. Sauerbraten is also a first person shooter, but with a great collection of maps built in for some fun single player action. Neverball is a nice 3D platform game. Glest and Warzone 2100 are real time strategy games. The former is set in a medieval (i.e. Warcraft) setting while the latter is more futuristic (i.e. Sid Meier's Alpha Centauri).
Wesnoth is a nice turn-based strategy game. TORCS is a cool 3D car racing game, and GL-117 is an action flight simulator that is worth playing. There are other games on this DVD, so you won't be in danger of getting bored anytime soon. I only mentioned my favorites.
Although the creative talent is not on par with that of their proprietary counterparts, I prefer these over the high-end, expensive games for reasons that are really compelling for the casual gamer. There is nothing to install on your hard drive, so you don't take any risks with messing up your system. These games are made by game lovers, so the game play itself is excellent. There is a wide and disparate selection, so if you get bored with one, just go to the next. Best of all, they're free.
Saturday, July 7, 2007
Happy Birthday Mr. Heinlein
Apparently, today is Robert Heinlein's birthday. What self-respecting male American geek could claim not to have read some of this man's books in his youth? I believe that the first book of his that I read, at a pretty early age, was Podkayne of Mars. His most famous book, of course, was Stranger in a Strange Land. Who could have known that the utterly wild premise of this book, that a science fiction writer could orchestrate a new religion, would come true one day?
But it was Heinlein's book I Will Fear No Evil that revealed to me the true reason why death is, ultimately, a good thing. If no one died, then there would be no room for children and we would all suffocate in our own endless supply of complaints, fear, cynicism, and irregular bowel movements.
Friday, July 6, 2007
What Does it Mean to be of Service?
At the beginning of summer, my air conditioning broke down. I called a local air conditioning service company to send a technician out to repair the unit. It is a central HVAC unit for the entire house. He recommended replacing it even before laying eyes on it. After some argument, he got to work. He cleaned and recharged the unit, which got it working again. His hourly rate is $90.
A week later, our air conditioning was broken again. We called and complained enough to get the owner out for a visit. We understood him to promise that he would fix it and not charge for the subsequent service call. He later claimed that he would not charge only if he felt that the original service technician had not done his job correctly.
I was present during the second visit. He opened up the unit and found that the electrical wire from the compressor to the contactor had melted. He replaced the contactor (which had some carbon scoring on it) and he replaced the compressor wire with a thicker wire. The original wire was 14 gauge and he replaced it with a 10 gauge wire. He also charged for the service call with the same hourly rate of $90.
I complained about the charge of the additional service call. The owner claimed that the original technician did nothing wrong. I said that he should have identified that the compressor wire was the wrong gauge. He countered that the incorrect wire was what was originally installed on the unit and not anything that his technician had installed.
Which leads me to this question. What does it mean to be of service? If you claim to be a full service company, then is it enough just to get the unit working long enough to get out the door or do you inspect everything and give your best advice on what is truly needed?
When you take your car in to get an oil change, they don't just change the oil. They change the oil filter. They inspect your tires. They inspect your air filter. They check and fill all fluids, if necessary. If all they did was change your oil, would you be satisfied? Would you feel that you had been served?
Let's pretend that you wanted to add a jacuzzi to your house and called in an electrician to run another power line. The first thing that electrician would do is inspect your breaker box. If it wasn't up to code, then he would tell you so. He would not just run the power line, charge for his time, and hope that the house doesn't catch fire before the check clears.
I presented all this to the owner. He then claimed that he, with thirty years' experience, could catch something that subtle, but that it was unreasonable to expect the technician, with only six months' experience, to catch the problem that my air conditioning had. Is it OK to claim to be of service but to use under-qualified or under-trained people? Where is the standard of excellence here? At the very least, he should charge a different hourly rate for trainees.
What do you think? What does it mean to be of service?
Thursday, July 5, 2007
A Very Foggy Vista
In a recent announcement, Dell warned businesses against upgrading to the latest operating system from Microsoft. Called Vista, this O.S. has suffered years of delay and many cutbacks in promised features and functionality. Dell is recognizing and admitting that Vista requires an amount of computer hardware resources that seems disproportionate to its abilities, especially when compared to previous MSFT operating systems.
As a longtime participant in the world of software development, I have been around to see a lot of products come out of Redmond. This blog has my own reflections on what has recently come out of this software giant. I have no transparency nor intelligence into the internal workings of MSFT so these musings are simply based on observing and evaluating their output over a long period of time and not on any knowledge of their internal processes outside what they themselves have published.
Microsoft is, mostly, a software company. They produce products in many markets. I believe that, historically speaking, their strongest markets are operating systems and developer tools. From an end user perspective, Windows 3.1 and DOS were far superior to the competition, which was IBM's OS/2 and the X Window System on Unix. This was back in the 80s. In the 90s, Windows still led the pack with Windows 95 and NT. Even the early 2000s saw Microsoft still producing a great O.S. with Windows 2000, XP, and 2003.
With apologies to George Lucas, there has been a disturbance in the force recently. To put it bluntly, the latest MSFT O.S. is a step backwards. Gamers complain about Vista's sub-par 3D graphics performance. Security experts dismiss Vista's UAC approach. Although I don't use it on a daily basis, I have had occasion to work on a laptop where Vista came pre-installed. Even though the laptop's hardware exceeds MSFT's recommendations for running Vista, its performance is lackluster; the latency is worse than that of another laptop, purchased four years ago, that runs Windows XP. How the GUI is organized is another complaint of mine. Everything has been rearranged in a way that seems arbitrary to me, so I'm having to learn a new organization of the computer's configuration without seeing any meaningful benefit from it. Another change that seems to be made simply for the sake of change is the move away from menus, which are now either cleverly disguised or completely removed. Why the GUI designers of Redmond have decided that menus are passé is completely beyond me.
And Microsoft's O.S. competition is catching up. I recently ran the live CD version of a very trendy Linux distribution called Ubuntu. In my humble opinion, performance with Ubuntu was better than with Vista. This is especially disturbing since Ubuntu was running off the DVD drive and Vista was running off the hard disk, which should have given MSFT an unfair performance advantage. The Ubuntu GUI made more sense and was much better organized. You also get more features with Ubuntu than with Vista, since it comes pre-installed with an office productivity suite and many other applications that Vista does not include.
I have been working in the IT industry for more years than I care to reveal, and I am very technical, so I am not the target audience for this O.S. I would much rather use Windows 2000 or 2003 than either XP or Vista, even as a workstation. The owner of this Vista laptop, a very representative member of the target audience, also prefers XP to Vista.
Many industry pundits claim that it is the age of the network, and they are right. The days of the stand-alone computer are all but gone. However, everyone gets to a network through a computer, and that computer, like every other computer, requires an O.S. to turn what would otherwise be a very expensive doorstop into a wonderful window on the world wide web. Perhaps MSFT has spent so much time sighting Google in their cross hairs that they have drifted away from their core value proposition.
Wednesday, July 4, 2007
A Good Cry for the Inner Child
The very process of growing up is also one of grieving, for everyone loses something in going from child to adult, innocence if nothing else. I just saw a film called Bridge to Terabithia that will give that inner child a great chance to express or relive that grief.
This is a great movie and I heartily recommend it to anyone and everyone. It is an old movie, if you count 1985 as old, so don't watch it with expectations shaped by what is currently popular in cinema.
I am very fascinated by the role that both media and technology play in the shaping and reshaping of the world mind, or cyber-gnosis. In fact, I have another blog that focuses on that subject. The way they marketed this movie had nothing to do with what it is really about. The movie was marketed as being in the same genre as Disney's remake of the popular C. S. Lewis classic, The Lion, the Witch and the Wardrobe. Not so, for this puppy. That's OK, as what you end up seeing is much, much better.