How can organizations 'crowdsource' while still ensuring reliability and quality? I've discussed why software development is the first place to look when thinking about this issue, which is relevant to every business.
The Business Readiness Rating (BRR) represents one effort to standardize appraisal of crowd-based software development. I’m posting the twelve criteria the BRR employs, and highlighting a few that speak to the unique dynamics of crowdsourcing:
Functionality – does the software meet user requirements?
Usability – is the software intuitive / easy to install / easy to configure / easy to maintain?
Quality – is the software well designed, implemented, and tested?
Security – how secure is the software?
Performance – how does the software perform against standard benchmarks?
Scalability – can the software cope with high-volume use?
Architecture – is the software modular, portable, flexible, extensible, and open? Can it be integrated with other components?
Support – how extensive is the professional and community support available?
Documentation – is there good quality documentation?
Adoption – has the software been adopted by the community, the market, and the industry?
Community – is the community for the software active and lively?
Professionalism – what level of professionalism do the development process and project organization exhibit?
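One natural way to use the twelve criteria above is to roll per-criterion scores up into a single rating. The sketch below shows a weighted average over the BRR's criteria; the weights and scores are illustrative placeholders, not the official BRR weighting scheme:

```python
# BRR-style weighted scoring. The criteria names come from the list above;
# the weights and example scores are made up for illustration and are NOT
# the official BRR methodology.

CRITERIA = [
    "functionality", "usability", "quality", "security",
    "performance", "scalability", "architecture", "support",
    "documentation", "adoption", "community", "professionalism",
]

def brr_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 1-5 criterion scores, normalized by total weight."""
    total_weight = sum(weights.get(c, 0.0) for c in CRITERIA)
    if total_weight == 0:
        raise ValueError("at least one criterion must carry weight")
    weighted = sum(scores.get(c, 0.0) * weights.get(c, 0.0) for c in CRITERIA)
    return weighted / total_weight

# Example: weight the crowd-specific criteria (support, adoption, community)
# more heavily than the rest.
weights = {c: 1.0 for c in CRITERIA}
weights.update({"support": 2.0, "adoption": 2.0, "community": 2.0})

scores = {c: 3.0 for c in CRITERIA}
scores.update({"community": 5.0, "adoption": 4.0})

print(round(brr_score(scores, weights), 2))  # → 3.4
```

The point of the weighting step is that an evaluator can tune the rating to their own risk profile: a firm worried about crowdsourced quality can up-weight community, adoption, and support relative to a firm that only cares about raw functionality.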
In the future, we’ll consider whether/how reliability and crowdsourcing can be balanced in other product development processes.
Presented in the context of a discussion about the internet and software, but a good metaphor for society, in general (from the Open Source Initiative):
Metcalfe’s Law predicts that the value of interoperability increases geometrically with the number of compatible participants, and Reed’s Law predicts that the utility of a network (implied by interoperable equivalence) increases exponentially due to the number of possible subgroups that interoperability enables. Both theories have successfully informed the investment of literally billions of dollars of capital investment as the Internet has become mainstream. Whichever law ultimately governs, interoperability is a positive function governing value, and thus any force that diminishes interoperability must be carefully scrutinized as it relates to ultimate and/or total value.
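The two laws in the quote above can be made concrete. Metcalfe's Law is usually stated as value growing with the number of pairwise connections, roughly n(n-1)/2, while Reed's Law counts the possible subgroups of a network, 2^n minus the trivial groups. A quick comparison (the exact formulas vary by formulation; these are the common textbook versions):

```python
# Common formulations of the two network-value laws quoted above.

def metcalfe(n: int) -> int:
    """Pairwise connections among n participants: n(n-1)/2."""
    return n * (n - 1) // 2

def reed(n: int) -> int:
    """Nontrivial subgroups of n participants: 2^n - n - 1
    (all subsets, minus the n singletons and the empty set)."""
    return 2**n - n - 1

for n in (5, 10, 20):
    print(n, metcalfe(n), reed(n))
```

Even at twenty participants the subgroup count (over a million) dwarfs the pairwise count (190), which is why Reed's Law implies such an aggressive payoff to interoperability.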
Open source software development highlights changes in production (cross-organization collaboration, the privileging of contributory potential over proprietary rights) that will come to affect all production. The most radical developments are happening in the software realm for several reasons: (1) collaboration on software can happen across great distances, (2) the returns to productivity from innovation itself are great enough that it is sometimes rational to forgo returns to ownership, and (3) the networking benefits of contributing to an innovating community have grown tremendously with the development of the internet.
These trends affect all productive activity, though often in a less vivid manner.
My last post generated a query re: how people make money in the open source world. This report summarized a few angles on the topic:
Paid support (e.g., Red Hat and JBoss) — If you follow open source at all, you are probably familiar with the Red Hat and JBoss models, where most of their revenue derives not from selling software, but from varying levels of support packages.
Dual license (e.g., MySQL) — MySQL, the popular open source database company, offers its software under the General Public License (GPL) for open source developers. The catch with the GPL is that if you bind closely to GPL code in your application, you must also GPL your code. Companies that want to sell an application incorporating MySQL can instead buy a traditional paid license. Visit their site for a detailed explanation.
Upgrade to proprietary software (e.g., SourceFire and Sun) — I’m most familiar with this approach, as Sun uses this model with its tools line, offering an entry point with the open source IDE NetBeans. From there, if developers want all the bells and whistles, they can move up to Java Studio Creator or Java Studio Enterprise. The same holds true for OpenOffice.org; users who want support and advanced features buy StarOffice.
Offer a hosted service (e.g., SugarCRM) — Skok [“David Skok, a partner at VC firm Matrix“] noted that not long ago he’d felt application software would not be a likely area for open source to prosper, but he now feels that this startup may be onto something, with its hybrid model.
Open source software represents a new model in production: the fruits of the central productive activity are given away free. The dynamics that characterize software development, and give rise to open source communities, cannot help but spread to other areas of production, as software development is increasingly important everywhere.
A great video, which argues that patent law is suffocating software development. (Hat tip: Jeremy Epstein)
A reader request at Alexandria (another blog at which I post) got me digging into the open source side of Electronic Health/Medical Records. I was fascinated by a discussion of how one can ascertain the stability of open source software, because it shed light on both (a) the viability of open source software for the future of health record systems and (b) the ways in which we determine the reliability of a product in general.
John, of the EMR and HIPAA blog, proffers that
The most important point to consider with an open source EMR is the health of the community surrounding the open source EMR. If the community is strong, then you’ll see some amazing things happen. If the community is weak, then the open source EMR will still be around in a few years, but no improvements to the software will be made. The way technology progresses means that your software must improve or it will be outdated in a couple years time.
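John's "health of the community" test can be given a rough quantitative shape: is contribution activity growing or shrinking over time? The sketch below compares recent activity against earlier activity using hypothetical monthly commit counts (the data and threshold are illustrative assumptions, not a measure John proposes):

```python
# A rough proxy for community health: compare recent contribution activity
# to earlier activity. The commit counts below are hypothetical examples.

from statistics import mean

def activity_trend(monthly_commits: list[int]) -> float:
    """Ratio of mean activity in the later half of the window to the
    earlier half; > 1.0 suggests a growing community, < 1.0 a shrinking one."""
    mid = len(monthly_commits) // 2
    early, late = monthly_commits[:mid], monthly_commits[mid:]
    if mean(early) == 0:
        return float("inf")
    return mean(late) / mean(early)

growing = [10, 12, 15, 14, 20, 25]   # hypothetical project A
stagnant = [30, 28, 20, 15, 10, 8]   # hypothetical project B

print(round(activity_trend(growing), 2))   # → 1.59
print(round(activity_trend(stagnant), 2))  # → 0.42
```

Real due diligence would look at more than commits (issue response times, number of distinct contributors, release cadence), but the same before/after comparison applies to each signal.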
Few people fall in love with health information systems because of someone bombing battleships, but Peter Groen is an exception. In 1981, while working as a computer systems analyst at a hospital in Atlanta, he was part of a team that helped a paralyzed patient communicate by carefully attaching an electrical lead to the man’s eyelid, through which he could instruct the computer to “bomb” targets in an early Apple video game. “We knew that a thinking man was trapped inside that body when he was able to follow instructions and successfully play the game. We were then able to write a simple program that let the patient ‘write’ a simple message using a similar method of controlling the computer.” Groen was hooked: “I couldn’t see working in finance, marketing or sales after that. I wanted to use my knowledge to help people and working in health care was the right place to do that.”
You may have read about the recent textbook revisions proposed in Texas (I include both a link to The Economist‘s take and a link to the more detailed, though more partisan, take of The Huffington Post). While The Economist, The New York Times, and others highlight the economic weight of textbook-related decisions by the State of Texas, I would argue that a more dramatic outcome may eventually involve the relative diminution of Texas’ influence. Namely: Texas’ potential decision creates a perfect storm to increase interest in electronic and/or open source textbooks, which do not require economies of scale as large as the current publishing regime does.
Electronic innovators and open source writers tend to be (much more often than not) precisely the sort of folks who would most object to the Texas Board of Education recommendations. I would not be surprised to see an “alternative textbooks movement” take root in the short term (and a less-covered Texas controversy took place on this front in recent months); I would, however, be quite surprised if such a movement did not materialize in the mid term.