Archive for the ‘Semantic Web’ Category
Really good news from Marc finally forced me out of my blogging slumber. IBM is going to offer a corporate social networking solution. IBM’s solution, called Lotus Connections, offers common social networking components out of the box for deployment within corporate environments.
The IBM package includes five applications: profiles, where employees post information about their expertise and interests; communities, which are formed and managed by people with common interests; activities, which are used to manage group projects; bookmarks, where people share documents and Web sites with others; and blogs, where people post ongoing commentaries.
This is great news because in effect IBM has validated the stance that social networking is indeed relevant in the corporate environment. The feature set mentioned above is a subset of what People Aggregator offers. And People Aggregator has been out in the market for several months now. But selling to corporates is hard, especially for startups. IBM’s announcement will have the effect of making big corporates take notice of social networking. The IBM marketing machinery will make sure that the corporate world gets adequately educated about the benefits of applying “Web 2.0” and social networking constructs in the workplace.
It is also heartening to know that we were ahead of the curve by some distance! It was almost two years back when the idea of developing People Aggregator as a reusable software download was conceived!
So hopefully next time Marc goes pitching to a big company, at least he won’t have to answer the “Why would we want that?” question!
The much blogged about service Edgeio launched today. Edgeio lets bloggers post their classified ads directly to the Edgeio website by tagging their posts with the keyword “listing”. So basically Edgeio is a classifieds aggregator. I am not yet convinced by this idea. Classified ads, by definition, need fine classification to be relevant and easily discoverable. I need to be able to say what I am selling, for what price (and currency), what model, until what date my ad is valid, and so on. This is important because otherwise people can’t search on these criteria. I doubt that can be achieved with the plain old text blog posts that Edgeio is harvesting right now.
This is a problem that has been given considerable thought and in fact this is at the core of the whole philosophy of the “semantic web”. Web 2.0 is supposed to be structured and machine readable. Blogs and RSS syndication were the first step in that direction. When you make a blog post, your RSS feed gets updated and hence, in a way, your blog becomes machine readable. That is why sites like Technorati can know exactly when and what you blogged. And that is as far as services like Edgeio can get today. But suppose you posted a review of a restaurant you went to, or a movie you watched: how do you let Technorati know that it is a review? And more specifically, a review of a movie which you rated 3 on a scale of 5? For a machine, it is all just plain text. So as far as a dumb machine (and they all are) is concerned, a blog post is no different from a movie review is no different from a classified ad.
Structured Blogging is an attempt to solve this problem. The basic approach is to allow the user to specify what type of content they are posting (review, event, blogpost, video, audio etc) and then based on the type of content, let them specify more specific “metadata”. The input data is then published as XML (using well defined schemas) as well as embedded (well defined) XHTML. Since the format is pre-defined, the content is machine readable.
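To make the idea concrete, here is a small sketch of what the embedded, machine-readable XHTML half of this could look like for the movie-review case above. It uses hReview-style microformat class names; the review data, the helper function, and the movie title are all made up for illustration, and the exact Structured Blogging schema may differ.

```typescript
// Sketch: turning a movie review into machine-readable XHTML using
// hReview-style microformat class names. A dumb machine scanning the
// page can now tell it is a review, of what item, and with what rating.
interface MovieReview {
  title: string;
  rating: number; // on a scale of 1 to 5
  comment: string;
}

function toHReview(r: MovieReview): string {
  return [
    '<div class="hreview">',
    `  <span class="item"><span class="fn">${r.title}</span></span>`,
    `  Rating: <span class="rating">${r.rating}</span> out of 5`,
    `  <p class="description">${r.comment}</p>`,
    '</div>',
  ].join('\n');
}

const html = toHReview({ title: 'Serenity', rating: 3, comment: 'Worth a watch.' });
console.log(html);
```

The point is that the rating lives inside a well-known class name rather than buried in free text, so an aggregator can search and filter on it.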
While this might sound like a lot of geek talk, wide adoption of structured blogging will have huge implications. Applications like Edgeio will be able to provide a much better user experience. Search engines will become smarter at answering queries like “show me all events related to XML happening in Gurgaon between this and this date”. There will be services which aggregate specific user generated content (as opposed to sites like Technorati which aggregate all types of blogs), like movie reviews or tech events. The possibilities are endless and I am excited about this next-to-next generation “web 3.0”!
At Tekriti, we have added Structured Blogging support in most of the stuff that we do (GoingOn and People Aggregator being two big ones).
After two days of hectic running around setting up the new office, I finally got a chance to catch up on my blog reading. The last few days have been incredibly predictable in the blogosphere. First it was the Skype-eBay deal (I was guilty of blogging that myself). Then everybody was blogging about “what is web 2.0”. 24 hours back Ning was hogging the limelight. Today it’s the sale of Weblogs Inc to AOL. Blog after blog talking about the same thing. Linking to the same URL. Providing the same analysis in different words. Man, reading blogs is starting to get *gasp* boring! I think we tech bloggers are getting too reactive in our blog posts. Worse, we all react to the same events. So suddenly all tech blogs start looking the same. I hope it’s a temporary phase, and I almost wish that nothing significant happens in the tech world over the next few days. At least then we will get some meaningful and original blogs to read!
Rashmi Sinha has done a very cerebral (in the truest sense of the word!) analysis of tagging. I would be lying if I said I understood all of it, but it is nevertheless a fascinating attempt to explain the popularity of tagging using well established principles of cognitive psychology.
Microsoft is going to offer new MSN APIs which will allow developers to access MSN Search, VirtualEarth, Messenger and MapPoint programmatically and build their own applications around them. This is great! The new WWW is all about open APIs. Actually, Messenger APIs were available earlier as well. But only the most basic UI automation API was made available freely. The more useful API functions required a key from Microsoft which, from what I can make out, was almost impossible to get. I used these APIs quite a bit while I was with Microsoft. I hope the new APIs are not going to be just these old APIs repackaged differently.
Though Microsoft has finally arrived, I wonder why it was so late to the web APIs party. If you think about it, the credit for the success of the Windows platform goes to a large extent to the ease with which you can develop applications on it. The Win32 API, along with VB, saw hundreds of thousands of developers choose the Windows platform over Apple or Unix. For the Windows division, the developer audience has been super important, almost more important than the end user. Microsoft couldn’t possibly ship everything that an end user wanted. But by making Windows an easy to program platform, they ensured that almost any possible useful application would eventually be written by somebody outside of Microsoft. The same logic should have been applied to the web and MSN from day one. Microsoft could have gone ahead and created a “web platform” similar to Windows for the desktop – complete with easy to use APIs. But instead, earlier attempts at providing APIs were very half hearted and hardly evangelized. I hope this time around things will be different. We will know in 2 days, when MSN unveils its shining new APIs here.
Of late we have been doing a lot of work in the rich internet application domain. We have used AJAX extensively to create slick user interfaces that look more and more like desktop applications. It amazes me that this technology existed for years and was never put to good use. Now that it’s finally gaining popularity and adoption, I get a feeling its time has already come and gone. Sure, previously unthought of web experiences are now possible with AJAX. But can DHTML and AJAX deliver on the thin client application-in-the-browser nirvana? I think not.
Firstly, coming from an operating system/application development background, the whole web programming model confounded me. There is no easy way to separate the UI from the business logic. I have somewhat managed to achieve this separation with ASP.NET and PHP5 classes, but HTML still remains a crappy way of developing user interfaces. Browser compatibility makes things worse (Safari has been the biggest PITA of late for us).
Secondly, AJAX is good for a “pull model”, i.e., when data needs to be fetched from the backend based on user action. But a lot of scenarios require that data be made available to the client as and when it becomes available on the backend. Email clients checking for new mail, a radio station updating track information, and chat applications are good examples. Right now, all these scenarios can be enabled only by periodic polling. Basically, a true client-server model cannot be achieved because the server cannot call back to the client (this is not only an AJAX limitation; such is the nature of HTTP). Almost all networked desktop applications use a client-server model or a peer-to-peer model. Neither can be achieved within the browser.
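A minimal sketch of the periodic-polling workaround such applications are forced into, in TypeScript. Here `checkServer` stands in for the XMLHttpRequest a real page would issue; the function and callback names are made up for illustration.

```typescript
// Sketch of the periodic-polling workaround: since HTTP gives the server
// no way to call back into the browser, the client must re-ask on a timer.
// `checkServer` stands in for the XMLHttpRequest a real page would make.
type ServerCheck = (done: (newItems: number) => void) => void;

function startPolling(
  checkServer: ServerCheck,
  intervalMs: number,
  onUpdate: (newItems: number) => void
): ReturnType<typeof setInterval> {
  return setInterval(() => {
    checkServer(newItems => {
      if (newItems > 0) onUpdate(newItems); // e.g. refresh the inbox UI
    });
  }, intervalMs);
}

// Fake "server" that always reports one new mail, polled every 30 seconds.
const timer = startPolling(done => done(1), 30_000, n =>
  console.log(`${n} new message(s)`)
);
clearInterval(timer); // a real page would keep this running for its lifetime
```

The obvious cost is latency and wasted requests: an update arrives, at best, one polling interval late, which is exactly the gap a server-initiated callback would close.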
Technologies like Laszlo try to ease the issues with HTML. But the real culprit is the browser itself. The browser was meant to be just that – a tool for browsing through web pages. It was not meant to be a platform for hosting entire applications. That is what an operating system was supposed to be for. But with the recent trend towards thin computing, the browser emerged as the natural thin client because it was already installed on everybody’s machines.
So here is my prediction for the future. Somebody (Google? Novatium? Microsoft?) will develop a new uber-browser platform which will run on all major operating systems. This new platform will be like a mini-OS by itself, complete with hooks and APIs and a programming model geared towards developing feature rich, desktop-like applications which will run within this browser. The platform will allow creation of true desktop-like applications which will be OS-agnostic. Most end users will not even realize the difference since it will be completely transparent to them. And THEN we would have attained thin client utopia!
Bill Burnham points out that according to a Nielsen study only 11% of blog readers use RSS. I had gotten a similar impression while talking to fellow bloggers in Delhi. Many of those who themselves blog do not use an RSS aggregator.
In the light of these numbers, the recent announcement by Microsoft to treat RSS as a first class construct in Windows Vista is particularly significant. IE7 also supports RSS and allows viewing feeds formatted within the browser window. There is nothing like a gentle push from the software giant to increase adoption. That is the thing with standards: they don’t mean much unless people actually use them. Microsoft was pretty notorious for flouting standards, and it’s heartening to see them support more and more open standards!
Tantek Celik announced microformats.org at Supernova 2005. Marc is also there talking about reblgging (the misspelling is intentional!) – something that we have been closely involved with. I spent the last couple of weeks learning all about microformats and microcontent (and the differences between the two). I will leave it to Marc to talk more about it but needless to say I am super excited! The semantic web is becoming a reality fast!