Mashup Camp IV Day 2: Automating web services and mashups

I’m in a session on automating the discovery and consumption of web services to create mashups. Coming from the enterprise integration side, much of this is familiar to me: using a directory service (e.g., UDDI) to discover service providers, both in the corporate SOA and among internet web services (SOAP, REST, AJAX, XML-RPC, JSON, etc.), and to extract the service description (e.g., WSDL). The presenter, however, is talking about adding a lot more intelligence to the discovery stage.
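To make the discovery workflow concrete, here’s a minimal sketch of the registry-lookup pattern being described: query a directory for providers of a capability, then pull each one’s service-description URL. The registry structure, provider names, and URLs are all invented for illustration; a real UDDI registry would be queried over its own API rather than an in-memory dict.

```python
# Toy in-memory "registry" standing in for a directory service like UDDI:
# capability -> list of providers, each with a service-description (WSDL) URL.
REGISTRY = {
    "geocoding": [
        {"provider": "ExampleGeo", "wsdl": "http://example.com/geo?wsdl"},
        {"provider": "OtherGeo", "wsdl": "http://example.org/geocode.wsdl"},
    ],
}

def discover(capability):
    """Return (provider, service-description URL) pairs for a capability."""
    return [(entry["provider"], entry["wsdl"])
            for entry in REGISTRY.get(capability, [])]
```

The “more intelligence” the presenter wants would sit on top of this step: instead of matching on an exact capability string, a semantically aware registry could reason over service metadata to find providers that merely *do the same kind of thing*.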

They’re making some pretty esoteric points about RDF, OWL and the semantic web, and for a while this turned into a two-way conversation between the main presenter and someone from a vendor who is obviously deeply into these issues as well.

Then we get to an interesting echo of yesterday’s session on why DIY when there are APIs available: someone stated that he was more likely to write something himself because it’s too difficult to find services/APIs on the web. Then the other side was considered: what if you put an API out there, and people use it in “stupid” (i.e., inefficient) ways that bring down your service? I think that API providers need to put some checks and balances in place, such as rate limiting by API key; someone from Google pointed out that if you don’t do that, people won’t take care to optimize their code to minimize API calls. In fact, there are services available to handle API keys and their usage, such as Mashery.
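The check-and-balance being discussed can be sketched simply: track recent calls per API key and reject any request that would exceed a quota within a time window. This is a generic fixed-window counter, not how Mashery or Google actually implement it; the window size and quota are assumed values.

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_CALLS_PER_WINDOW = 100  # assumed quota per key

_calls = defaultdict(list)  # api_key -> timestamps of recent calls

def allow_request(api_key, now=None):
    """Return True if this key is under its quota for the current window."""
    now = time.time() if now is None else now
    window_start = now - WINDOW_SECONDS
    # Drop timestamps that have aged out of the window.
    _calls[api_key] = [t for t in _calls[api_key] if t > window_start]
    if len(_calls[api_key]) >= MAX_CALLS_PER_WINDOW:
        return False  # over quota: reject (e.g., with HTTP 429)
    _calls[api_key].append(now)
    return True
```

The point made in the session follows directly: without something like this, a caller pays no penalty for hammering the service, so they have no incentive to cache results or batch their calls.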
