ZapThink just released a research paper about Data Mashups.
Extending the notion of mashups, data mashups promise to decouple data integration from heavy development cycles. But as ZapThink's Ron Schmelzer writes, this requires having a strong Data Services Layer in place.
I fully agree with the following statement from the research:
"...the IT organization must give Service consumers the tools and methods they need to be able to successfully compose those Services with low cost and risk..."
And this is exactly why Dynamic Data Services are so important. Statically defined, hard-coded (or visually composed) data services may meet the requirements of statically defined service-oriented processes, but the reactive enterprise needs more flexibility. The relevant data services must be available in real time whenever data mashups require ad-hoc data. A good Data Services Platform must therefore support runtime generation and deployment of ad-hoc data services.
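To make the idea concrete, here is a minimal sketch of what runtime generation of an ad-hoc data service could look like. All names (`DataSource`, `DataServicesPlatform`, `generate_service`) are hypothetical illustrations, not any particular vendor's API: a mashup asks the platform for a view it needs, and the platform composes a service on the spot rather than relying on one coded ahead of time.

```python
# Hypothetical sketch: a platform that generates an ad-hoc data
# service at runtime instead of relying on hard-coded ones.

class DataSource:
    """A backing source registered with the platform."""
    def __init__(self, name, records):
        self.name = name
        self.records = records  # list of dicts

    def query(self, predicate):
        return [r for r in self.records if predicate(r)]


class DataServicesPlatform:
    def __init__(self):
        self.sources = {}

    def register(self, source):
        self.sources[source.name] = source

    def generate_service(self, source_name, fields):
        """Compose and 'deploy' an ad-hoc data service on demand:
        a callable that projects only the requested fields."""
        source = self.sources[source_name]
        def service(predicate=lambda r: True):
            return [{f: r[f] for f in fields}
                    for r in source.query(predicate)]
        return service


platform = DataServicesPlatform()
platform.register(DataSource("customers", [
    {"id": 1, "name": "Acme", "region": "EU", "revenue": 120},
    {"id": 2, "name": "Globex", "region": "US", "revenue": 340},
]))

# A mashup requests an ad-hoc view at runtime; no such service
# existed before this call.
eu_view = platform.generate_service("customers", ["name", "region"])
print(eu_view(lambda r: r["region"] == "EU"))
# [{'name': 'Acme', 'region': 'EU'}]
```

A real platform would of course add security, caching, and service contracts, but the essential move is the same: the service is a product of the request, not of a development cycle.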
Later the author writes:
"...One of the important benefits of a Data Services layer is that it enables loose coupling between the applications using the Data Services and the underlying data source providers. Loose coupling enables data architects to modify, combine, relocate, or even remove underlying data sources from the Data Services layer without requiring changes to the interfaces that the Data Services expose. As a result, IT can retain control over the structure of data while providing relevant information to the applications that need it. Over time, this increased flexibility eases the maintenance of enterprise applications..."
In a world where most data sources will become service-oriented (even the databases themselves), it is important to truly achieve this decoupling between the data services and the data sources. In this particular case, that requires extended semantic metadata around data services, so that an advanced Data Services Platform can dynamically recompose them at runtime as new data mashups request them.
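The decoupling described above can be sketched as a registry in which consumers resolve services by a semantic concept rather than by a concrete source, so the architect can modify or relocate the source without touching the interface consumers depend on. The names here (`ServiceRegistry`, the `"customer-address"` concept) are illustrative assumptions, not part of the research paper.

```python
# Hypothetical sketch: binding services to semantic concepts so the
# underlying source can change without breaking consumers.

class ServiceRegistry:
    def __init__(self):
        self._providers = {}  # semantic concept -> provider callable

    def bind(self, concept, provider):
        """Bind (or re-bind) the provider behind a semantic concept."""
        self._providers[concept] = provider

    def resolve(self, concept):
        """Consumers resolve by concept, never by concrete source."""
        return self._providers[concept]


registry = ServiceRegistry()

# Initial provider: imagine this wraps a relational table.
registry.bind("customer-address",
              lambda cid: {"id": cid, "city": "Boston"})
print(registry.resolve("customer-address")(7)["city"])  # Boston

# The architect relocates the data behind the same concept, here
# adding a field from a second source. Consumers still resolve the
# same concept and are unaffected.
registry.bind("customer-address",
              lambda cid: {"id": cid, "city": "Boston", "zip": "02110"})
print(registry.resolve("customer-address")(7)["city"])  # Boston
```

The semantic metadata is what makes the runtime recomposition safe: the platform knows which concept each service fulfils, so it can swap, combine, or remove sources behind that concept as new mashups demand.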