In addition to his consulting practice, Dr. Newcomb co-edits the ISO/IEC 13250 "Topic Maps" standard and co-chairs the annual Extreme Markup Languages Conferences.
The International Electronic Publishing Research Centre (IEPRC) is a non-profit-distributing research centre dedicated to the development of electronic publishing. Dr. Blunden also serves as President of the Association of Independent Research and Technology Organisations (AIRTO).
Encouraging Dialogue between IT Developers and Purveyors of Knowledge-rich Products and Services
Web Services: Yet Another Incarnation of the Content vs. Process Dialogue?
Encouraging Dialogue between the Process and Content Communities
Forging Alliances Between Research Communities
Producers of Knowledge-rich Products and Services
Related Business Opportunities
What is Smart Content, and why is it important?
What Will the Knowledge Economy Bring?
In September 2003, Salzburg Research concluded a study on the "Future of Electronic Publishing towards 2010" [EP 2010] for the European Commission [EC], which provides significant funding for research in Europe. EP 2010 provides a framework for R&D endeavours in multimedia content creation, management and delivery, and it assesses the relevance of such research to European industry. The notion of "Smart Content" plays a central role in the study [EP 2010].
The EP 2010 study is a large, multi-faceted document. We cannot hope to transmit or summarize it adequately in this paper, and, in any case, there is no reason to do so, since this worthy document speaks quite eloquently for itself. Instead, we offer some reflections and perspectives on its content, emphasizing some points to which we believe the attention of the European industrial and research communities should be drawn.
As members of the EP 2010 Steering Committee, we considered as fundamental the question, "What are the characteristics of the research most likely to enhance the competitiveness of the European economy?"
In the global marketplace, businesses must concentrate on giving value for money. The economies that best use their collective intelligence and creativity will be strongest, at least partly because they will discover compelling reasons to form alliances and exploit combinations of assets and competencies sooner and/or more efficiently than others. It seems reasonable to assume that, as the effective knowledge-richness of the products that circulate in an economy increases, the efficiency of that economy's exploitation of its knowledge will also improve.
Knowledge-richness and information technology are mutually dependent, but they are not the same. Even if there were no fundamental improvements in information and communication technology, the amount of knowledge-rich products and services would increase, driven by economic, technological and social needs:
Purveyors of knowledge-rich content would be well-advised to participate actively in the further development of information technology, rather than taking a more passive attitude. The view, generally accepted in the publishing industry, that improved information technology has not produced bottom-line benefit reflects a failure to distinguish between systems of facilitation, on the one hand, and end-product benefit through increased value-added offerings to the consumer, on the other. To evaluate the bottom-line impact of improved IT solely in terms of the savings it provides by facilitating existing business processes is to ignore the business opportunities it creates -- opportunities whose benefit to the bottom line may be far more significant. Enlightened, active participation by content purveyors in IT development research would improve the likelihood of their developing new business models with significant potential for new profits.
On December 1, 2003, Wernher Behrendt, one of the authors of EP 2010, presented a summary of the study at a round table gathered by the EC (Directorate General E2, Knowledge Management and Content Creation). One of the participants was Michael Brodie, Chief Scientist of Verizon, who proposed that the central issue of Web Services is "semantics". Brodie's analysis of current web services was in line with EP 2010's perspective, that Web Services are no more and no less than the addition of remote procedure calls to the web, but, he argued, Microsoft and its partners are betting so much of their future software on what they call Web Services that there will be significant research issues emanating from their definition of "semantic interoperation".
Mr. Behrendt proposes that the most general definition of an interoperation specification -- for Web Services or anything else -- requires that both processes and content be semantically self-describing. He wonders whether the combination of self-describing processes (what they do) and self-describing content (what it is) are the "promised land of true interoperation". His reasoning goes something like this:
Considering both approaches at the same time can lead to an ideal world or to the worst possible world:
In the ideal world, the content structures are kept simple enough for well-defined operations to do meaningful things with the content, and the set of available operations provides enough scope to do anything that adds value, in a well-understood fashion.
In the worst possible world, the process definers have differing assumptions (or none at all!) regarding the content structures they expect to be manipulating, and vice-versa, the content definers have differing assumptions (or none at all) about how the content structures could and should be used by processes dealing with the content. Our approach to combat the latter scenario is to propose a common vision for the process-evangelists and the structure-gurus. The smart content vision tries to engage both groups in a dialogue which leads to a complementary distribution of semantics between content and processes, i.e. data and algorithms (cf. Dijkstra!).
The primary players in the knowledge economy, now and in the future, tend to fall into one of the following two categories:
Reasons for the Process Bias. The ostensible purpose of electronic publishing processes, including web services, is to establish communications channels between those who create content and those who consume it. Processes are based on business models that rely on statistical truths, rather than on individual creativity, individual taste, or individual agenda. (For example, not everyone enjoys watching lengthy torture scenes, but, statistically speaking, there are enough people who do enjoy them that Mel Gibson's The Passion will make money.) Content just sits there; it doesn't generate profits. Processes, however, do generate profits, because they are places where tollbooths can be erected, and money can be collected.
Reasons for the Content Bias. From the perspective of content producers and consumers, content is the whole point of the communications channels provided by the Process-biased players. Most people don't care how the internet works, or how DVDs are manufactured, or how electricity is generated and distributed, or, for that matter, how web services work. But everybody would sorely miss content, if it stopped being available, just as everybody would sorely miss electricity. Indeed, the mere deployment and use of knowledge management systems is not by itself the strongest indicator of the emergence of the knowledge economy. Developers of new business models need to distinguish between the provision of enhanced systems, on the one hand, and the value-added activities that such systems enable, on the other.
Both groups are focused on what they perceive as the main chance, and, unsurprisingly, they have different agendas. However, in order to maximize overall industrial prosperity (and, not coincidentally, the prosperity of both the Process-biased and the Content-biased groups), the players in each group must seek to exploit the business models of the players in the other. Therefore, each EP research project that is intended to enhance European economic prosperity should conscientiously involve players that reflect both biases and serve both agendas.
For example, research in web services might be well-advised to consider that, if an interface to a Web Service is adequately self-describing, the content consisting of its incoming and outgoing messages should also be self-describing in the same terms, by leveraging the same description. Mr. Behrendt's "promised land of interoperation" is more likely to be reached if additional similar connections are conscientiously made between the perspective of the content-biased community and the offerings of the process-biased community.
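The connection suggested above -- one description serving both the service interface and the content of its messages -- can be sketched in a few lines of Python. Everything here (the Description class, the describes function, the price_lookup service) is a hypothetical illustration invented for this paper, not any real web-services API:

```python
# Sketch: a service publishes ONE description, and both its incoming
# and outgoing messages are "self-describing in the same terms" because
# they can be checked against that very description.
from dataclasses import dataclass

@dataclass
class Description:
    """One shared artifact: names the operation and the shape of the
    content flowing into and out of it."""
    operation: str
    input_fields: dict    # field name -> expected Python type
    output_fields: dict

def describes(desc: Description, message: dict, direction: str) -> bool:
    """True if the message is fully described by the service's own
    interface description, for the given direction ('in' or 'out')."""
    fields = desc.input_fields if direction == "in" else desc.output_fields
    return (set(message) == set(fields)
            and all(isinstance(message[k], t) for k, t in fields.items()))

# The service publishes one description...
price_lookup = Description(
    operation="price_lookup",
    input_fields={"isbn": str},
    output_fields={"isbn": str, "price_eur": float},
)

# ...and that same description leverages as the description of its content.
assert describes(price_lookup, {"isbn": "0-13-651431-6"}, "in")
assert describes(price_lookup, {"isbn": "0-13-651431-6", "price_eur": 42.0}, "out")
```

The point of the sketch is only that the interface description and the content description are the same object, so a change to one cannot drift away from the other.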
If experience is any guide, the process-biased players will readily respond (and are already responding) to the self-description imperative for generalized services. The content-biased players, however, face a larger, more complex challenge in achieving true self-description for their often highly-specialized products. The systems offered by the process-biased players can be very complex, but at least they are necessarily already formally and explicitly described and codified.
By contrast, the work of the content-biased people often must encompass unbounded numbers of extremely diverse details of knowledge, often at levels of complexity that may be impractical to codify formally, much less to make fully and explicitly self-describing. As knowledge is increasingly embedded in technology, future business models for content industries need to take account not only of whatever business potential there may be in explicitly codifying the substance of knowledge, but also of immediately practicable improvements in the availability and usefulness of:
The development of a competitive European "knowledge economy" will depend on collaborations between diverse research communities. Some projects involving collaborations among these communities are already operating, but there is room for more. The EC's Framework Programmes can organize and/or provide incentives for the organization of collaborative projects involving combinations of institutions and individuals with extraordinary qualifications in:
Organizations capable of producing, managing and competitively selling knowledge-rich products and services will be at the forefront of the knowledge economy. In order to develop such capabilities, they must invest in developing:
Organizations can capitalize on knowledge needs if they can sell technologies, services or consulting competitively. This suggests there will be a growing market for:
There is likely to be significant growth in e-publishing, digital broadcasting, digital television and digital games. The latter may provide models for interactive advertising. 
Collaborative research, especially when EC sponsorship helps to reduce the risks involved, is the logical target for strategic co-investments by organizations intending to be at the forefront of the knowledge economy. Even if such research projects do not produce their intended results, well-run projects can pay handsome dividends in the development of competencies and relationships, serendipitous technological and methodological benefits, and in discoveries leading to new business opportunities.
Europe, like every other regional economy, needs to maximize its capacity for opportunity discovery, knowledge trading, and alliance-making. The turbulence and competitiveness of the global marketplace is increasing, with no end in sight. No single economy can control the increase in knowledge that causes turbulence and dislocation; human creativity appears wherever there are human beings, and the demand for creative solutions always exceeds the supply. In order to make the most of its own creative capacities, Europe needs to support a knowledge economy that is at least as turbulent as, and is ideally more turbulent than, any other. The EC should consider focusing its research funding on revolutionary knowledge management paradigms, approaches, and technologies that will accelerate the exploitation of ideas in Europe.
The EP 2010 study proposes "Smart Content" as an organizing vision for diverse research intended to accelerate the exploitation of ideas. The "smartness" of content is its "ability to participate fully, at the semantic level, in the ambient intelligence space".
One definition of smartness is "readiness to be semantically integrated with other smart content", which means:
The notion of smartness applies to all semantics, including substantive, renditional, and "meta" (ontological, taxonomic) semantics, and to instrumental, developmental, and self-availing semantics.
The notion of Smart Content has some similarities to some visions for the "Semantic Web", but the notion of Smart Content does not necessarily respect all of the prevailing assumptions that guide current thinking about the Semantic Web. More generally, while it may be desirable, it is not necessary for Smart Content to conform to W3C Recommendations or to ISO standards. It is not necessary for developers of Smart Content or Smart Content technologies to wait for any organization to adopt their work. In order to make progress in developing a European knowledge economy, it is only necessary to contribute efficiency and effectiveness to the way in which Europe exploits its knowledge, creating European economic activity and European wealth. 
When we say that Smart Content "describes itself", we imply that the content knows about and includes (perhaps by reference) the ontologies on which it depends. It seems likely that collaborative research projects could, for example, productively address some fundamental questions:
With new content standards appearing almost daily, the latter questions -- how content assets that conform to diverse standards can be usefully integrated -- are particularly urgent. The good news is that these problems are not nearly as challenging as the infamous Natural Language Processing problem. At least for the semantics that they explicitly understand and support, formal information interchange standards have far simpler, far more formalized and far more rigid grammars, ontologies and semantics than those of natural languages. At the same time, however, the problem cannot usefully be solved simply by creating some sort of Master Standard to which all others must be adapted, or that requires all others to be abandoned in favor of it.
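The idea that Smart Content "knows about and includes (perhaps by reference) the ontologies on which it depends" can be sketched as follows. This is a speculative illustration, not a proposed format; the class, function, and URIs are all invented for the example:

```python
# Sketch: content carries explicit references to the ontologies it
# depends on, so a consumer can discover what it must understand
# BEFORE attempting to interpret the content.
from dataclasses import dataclass

@dataclass
class SmartContent:
    body: str
    ontologies: tuple   # URIs of ontologies the content depends on

def missing_ontologies(content: SmartContent, known: set) -> list:
    """Which of the content's declared dependencies does this
    consumer not yet understand?"""
    return [uri for uri in content.ontologies if uri not in known]

doc = SmartContent(
    body="<poem>...</poem>",
    ontologies=("http://example.org/ont/literary-forms",
                "http://example.org/ont/renditional"),
)

# A consumer that understands only renditional semantics learns that it
# must first fetch the literary-forms ontology before it can claim
# semantic integration with this content.
consumer_knows = {"http://example.org/ont/renditional"}
gaps = missing_ontologies(doc, consumer_knows)
```

Whether such dependency declarations are carried inline, by reference, or negotiated dynamically is exactly the kind of question the research projects mentioned above could address.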
Smart Content may retain (or actually consist of) a connection to its source, so that it can expire, be retracted, be revised, etc. It may also be desirable for Smart Content to include and somehow implement the license under which it was obtained. These kinds of possible features suggest worthwhile questions for research, including:
European research and technology programmes will affect the quality of life that future generations of Europeans will enjoy, provided Europe is able to use the developed technology in ways that are compatible with its societal visions.
EP 2010 raises a banner on which the words "Smart Content" are emblazoned. It's not completely clear what "Smart Content" means; it's an abstraction -- a rallying point. The phrase "Smart Content" draws attention only to the value of information, the ascension of the knowledge economy, and the inevitability of the fact that, as the future unfolds, information will increasingly support the illusion that it "knows" more and more -- that it is "smart". Importantly, the "Smart Content" banner does not exclude non-Europeans; all contributions to this European initiative are welcome, and non-Europeans are welcome to benefit from it, too.
Can (or will) Europe seize the opportunity to define its community as a knowledge economy, and its knowledge economy as its community? Realistically speaking, there are reasons to believe that Europe cannot or will not do any such thing, including:
Despite the obstacles, Europe's opportunity is real. Europe has a great wealth of talent, and its most gifted decision-makers are socially aware, intellectually alive, and visionary. Europe's industrial and academic research organizations can regard "Smart Content" as the rubric of an alliance that will benefit themselves, each other, and the public. Europe cannot avoid the disruptions that the advent of the knowledge economy will certainly bring, but Europe can choose to position itself as an important source of disruptive breakthroughs. Public participation in research funding is wise because even unsuccessful research generates significant economic activity, and it develops and improves the value of the community's human resources.
The worldwide knowledge economy will not spring into existence fully-formed, as Athena is said to have sprung from the forehead of Zeus. Instead, the worldwide knowledge economy will first be prototyped by the community that invents and exemplifies it. It is appropriate and timely for Europeans to ask themselves:
This idea, among others, was investigated in the IEPRC's "active-ad" project reported in its 500-page Interactive Advertising - a Guide to Success (http://www.ieprc.org). The project was supported by the European Commission; partners in the project included the World Federation of Advertisers, IAB Europe and the European Association of Communications Agencies.
The recent history of the nascent worldwide knowledge economy, including the history of the internet itself, strongly supports the view that standards rarely represent forward leaps in either theory or practice. Standards bodies have the most influence when they do a better job of codifying and rationalizing existing practices than others were in a position to do. Significant breakthroughs, on the other hand, can come from anywhere, but it is worth noting that they very often emanate from activities that enjoy significant public funding. There is nothing economically weak or historically suspect about applying public resources to the problem of making technological and/or economic progress possible. It is certainly a challenging endeavor, but it is not a wrongheaded one.
French philosopher Pierre Lévy, author of Collective Intelligence: mankind's emerging world in cyberspace and Canada Research Chair in Collective Intelligence, is doing promising work in development of a master compass system for navigating arbitrary knowledge spaces (http://www.collectiveintelligence.info) at the University of Ottawa.
Dublin Core, for example, is formal to the extent that it provides a way for a computer to tell what the title of a book is. The title itself, however, is normally a bit of natural language. Thus, the semantics of title-ness are formalized, but the semantics of title -- the semantic substance of the book -- are only formalized to the extent that the creator of the Dublin Core-conforming information has provided for such formalization.
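The distinction can be made concrete in a few lines. The Dublin Core term names (dc:title, dc:creator) are real, but the record structure below is an illustrative simplification invented for this paper, not a Dublin Core API:

```python
# Sketch: the semantics of title-NESS are formalized (a machine knows
# which slot holds the title), while the title itself remains a bit of
# natural language whose meaning Dublin Core does not formalize.
record = {
    "dc:title": "Collective Intelligence",   # formal slot, informal value
    "dc:creator": "Pierre Levy",
}

# A machine can reliably answer "what is the title of this work?"...
title = record["dc:title"]

# ...but extracting meaning FROM the title string is still a natural
# language problem, outside the scope of the metadata standard.
```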
Special thanks are due to Wernher Behrendt (Wernher.Behrendt@salzburgresearch.at) and Siegfried Reich (firstname.lastname@example.org) for their contributions to this paper.
The members of the Steering Committee of the EP2010 study were: Brian Blunden (International Electronic Publishing Research Centre (IEPRC), UK); Pascal Jacques (European Commission, DG Information Society, Unit E2, KMCC, LU); Roberto Minio (KnowledgeViews Ltd., UK); Erich J. Neuhold (Fraunhofer IPSI, DE); Steven R. Newcomb (Coolheads Consulting, USA); Philippe Wacker (European Multimedia Forum, BE).
The EP 2010 Study was funded by the European Commission, Information Society DG-E2 Knowledge Management and Content Creation, Roberto Cencioni (Head of Unit), Pascal Jacques (Head of Sector).