Wednesday, November 26, 2008

OJM

Object to JDBC Mapping: OJM.
When I first saw that new TLA, it immediately struck me as a pure oxymoron.

Reading the CPO web site just confirmed this. There are so many strange and sometimes inaccurate statements on this page. The author seems to ignore what modern persistence is.
I really hate to be harsh like that, but is there really a need, in 2008, for yet another JDBC abstraction?
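To make the complaint concrete: the row-to-object mapping that every JDBC abstraction ultimately automates fits in a few lines. Here is a minimal sketch in Python, with sqlite3 standing in for JDBC (the class and column names are made up for illustration):

```python
import sqlite3

class Customer:
    """Plain object whose fields mirror the columns of a table."""
    def __init__(self, cid, name):
        self.cid = cid
        self.name = name

def load_customers(conn):
    # The step every O/R (or "OJM") layer ultimately automates:
    # run SQL, then copy each row's columns into an object's fields.
    rows = conn.execute("SELECT id, name FROM customer ORDER BY id").fetchall()
    return [Customer(cid=r[0], name=r[1]) for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customer VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])
customers = load_customers(conn)
```

The whole added value of such a layer is generating that mapping loop for you; whether that deserves a new acronym in 2008 is precisely the question.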

Different problems, different databases

The big promise of relational databases was to have a single, unified technology for all our data storage needs. The main idea was to separate data from the applications manipulating that data.
Having data models too tightly coupled with application models had indeed been recognized in the 70s as one of the main problems limiting IT flexibility (for instance, because producing new reports for business users required a full development cycle).

But at the same time, software design made significant progress by recommending the encapsulation of data (state) within methods (behavior), clearly going in the opposite direction. This created a lot of stress and noise in the software industry, and eventually led to the emergence of persistence technologies.

But what does decoupling data from applications really mean? It mostly consists of removing explicit directional relationships from database schemas, so that data views can later be recombined in any way. When you think about it, it just means that relationships were poorly represented in programming languages, and this is still true of most modern languages, including Java and C#. But to be honest, relationships are also poorly represented in relational models, and nasty foreign keys do not change anything about that.
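To illustrate how poorly languages represent relationships: in a typical object language a relationship is just a one-way pointer, and nothing keeps its inverse consistent. A small hypothetical Python sketch:

```python
class Customer:
    def __init__(self, name):
        self.name = name
        self.orders = []              # inverse side, maintained by hand

class Order:
    def __init__(self, ref, customer):
        self.ref = ref
        self.customer = customer      # one-way pointer, much like a foreign key
        # The language offers no notion of a bidirectional relationship:
        # the inverse side must be updated explicitly, or it silently drifts.
        customer.orders.append(self)

alice = Customer("Alice")
first = Order("A-1", alice)
```

Nothing in the type system ties `Order.customer` and `Customer.orders` together; the "relationship" exists only in the developer's discipline, which is exactly the gap described above.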

The fact is that the next really big revolution in the IT world will be the first comprehensive support for the notion of relationships.

Object database vendors failed to impose ODBMS even though it was the most relevant choice for Java, at least from a technical point of view. There are many reasons for that:
  • Some early ODBMS implementations were very bad in terms of database administration, ad-hoc queries and overall performance. They were not really databases but rather storage mechanisms for in-memory object pages.
  • The ecosystem (reporting tools, etc.) never really developed around ODBMS.
  • ODBMS started in the late 80s, exactly when RDBMS were about to gain momentum on the market; it was not the right time to impose a new database technology.
  • Major ODBMS vendors then raised money from IPOs around 1995, in a very quiet time with no opportunity for expansion and no real need for money, so most of that money was wasted.
  • In 1998, the ODBMS vendors missed the Internet wave, mostly because of the XML mania at that time.
  • They then surrendered and tried to reposition themselves as caches (Versant, Gemstone) or XML storage (Objectstore).

Fortunately, the XML database market never really emerged, despite the huge XML hype, probably because everybody understands that XML is a good exchange format but a very bad (too verbose) storage format. The big problem with XML is still that it tends to impose hierarchical models, which to some extent are a kind of regression for our industry.

You can easily put an object or XML layer on top of any kind of storage, including relational (see IBM pureXML, for instance). Probably the best approach would be to have neutral and efficient storage engines, with multiple interfaces around them. It could be a kind of relational storage with the notion of relationship efficiently supported.

Internet, SOA, WOA, Web 2.0, mashups, etc. all favor a style where business functionalities become independent and thus have their own storage (being physically distributed, they can no longer share a common database).

It now seems some vendors are trying to push the idea that even this low-level underlying storage layer should have different foundations, depending on the kind of problems addressed. That's why the column-oriented model (storage organized primarily by columns instead of rows, as in Vertica) and the key-value model are growing quickly these days.
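The row-versus-column trade-off can be shown with plain in-memory structures. This toy Python sketch (invented data) contrasts the two layouts; a real column store like Vertica adds compression, sorting and distribution on top of the same columnar idea:

```python
# Row layout: one record per entry; reading one attribute drags
# every other attribute of the row along with it.
rows = [
    {"city": "Paris", "year": 2008, "sales": 120},
    {"city": "Lyon",  "year": 2008, "sales": 80},
    {"city": "Paris", "year": 2007, "sales": 95},
]

# Column layout: one array per attribute; an aggregate over "sales"
# scans a single contiguous array and ignores the other columns.
columns = {
    "city":  ["Paris", "Lyon", "Paris"],
    "year":  [2008, 2008, 2007],
    "sales": [120, 80, 95],
}

total_from_rows = sum(r["sales"] for r in rows)
total_from_columns = sum(columns["sales"])
```

Both layouts hold the same data; the columnar one simply matches analytic access patterns (aggregate one attribute over many records), while the row one matches transactional patterns (read or write one whole record).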

They will certainly not replace RDBMS any time soon, as some are already claiming, but they may well establish themselves in some situations. It seems to me the reign of the omniscient relational model is about to decline, even if it will remain present for decades.

Data services directly impact the database world because:

  • We have more and more data sources to access from even a simple business application.
  • The notion of transaction is changing.
  • We increasingly have to support asynchronous data access.
  • It becomes not only possible but also mandatory to access any kind of data source, not only relational ones.
  • Databases are progressively being commoditized, and their advanced features will move to intermediate mediation layers.
  • It then becomes possible to choose the best database technology for a given need, at any time.

New databases

Following my previous post about the future of databases, I've seen these recent posts on TheServerSide:
  • An interview on CouchDB, the Apache project for a document database, written in Erlang, with an HTTP/REST/JSON API, already mentioned on this blog. There is an interesting point of view at the end about BigTable. A big, scalable, persistent Map is certainly interesting in some very specific cases, but in the end it is not really a database.
  • Scalaris, a scalable, transactional data store for Web 2.0 services. Yet another distributed, persistent key-value system. This one shares many design ideas with BigTable and, like CouchDB, is written in Erlang. The big addition seems to be better support for transactions. See OnScale for more information (videos, slides, etc.).
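For a feel of what a "big persistent Map" gives you, and what it does not, here is a minimal Python sketch using the standard shelve module as a stand-in for a key-value store (keys and values are invented):

```python
import os
import shelve
import tempfile

# A persistent Map: string keys to arbitrary values, surviving restarts.
path = os.path.join(tempfile.mkdtemp(), "store")

with shelve.open(path) as db:
    db["user:1"] = {"name": "Alice", "logins": 3}
    db["user:2"] = {"name": "Bob", "logins": 1}

# Reopening shows the data is still there. But note what is missing:
# no query language, no secondary index, no join. Lookups happen by key only.
with shelve.open(path) as db:
    alice = db["user:1"]
    count = len(db)
```

Everything beyond key lookup (range scans, ad-hoc queries, transactions across keys) is exactly what systems like Scalaris or BigTable have to add on top, and what separates a persistent Map from a database.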

Tuesday, November 25, 2008

The Future of Databases

Last week in San Jose, during the Data Services World event, I participated in several discussions (private journalist briefings, public panels) about the future of databases in the age of Cloud Computing. There are people who quite seriously think today that RDBMS will soon disappear because of the Cloud.

I have seen that the same topic has also been discussed in various places at the same time, including an interesting point of view from Martin Fowler.

New technologies for databases can be roughly divided into two groups:
  • New kinds of database technologies, designed and tuned for the Cloud, like Vertica, CouchDB, SimpleDB and similar products I've already mentioned on this blog.
  • New deployment and access models for RDBMS, known as Database-as-a-Service. Basically, the database is remotely hosted and administered, but you still access it through SQL over HTTP or SOAP/REST.

Having worked in the past for an ODBMS vendor, I know how difficult it is to convince CIOs, project managers, architects, developers and DBAs to move away from RDBMS. There is a kind of religion around relational theory. My take is that RDBMS are here to stay, as mainframes did (they never disappeared, as many "experts" predicted in the past). New technologies never replace good old ones; they just complement them.

Anyway, there are tangible impacts of SOA and the Cloud on the database market:

  • We will access more kinds of data sources in the future, not only RDBMS and services, but also new kinds of databases. Heterogeneity will continue to grow.
  • We will access more data sources in the future: most applications used to rely on a single database, but they will now access multiple data sources. We are switching from data access to data integration (I tend to prefer the term adaptive mediation of information). Integration has to be done at the business level, not at the SQL or XML one.
  • Many advanced features of database engines (security, fault tolerance, stored procedures...) will progressively move to an intermediate integration layer. Databases, including relational databases, will go back to being simple and efficient storage technologies.
  • Data integration will become more important than the database itself; databases will be commoditized. Each application development team will be able to select the best database technology for its needs.
  • Accessing non-database data sources will require extended metadata. The relational world is simple because SQL provides a convenient, technical API to access data at the atomic level (a cell at the intersection of a row and a column). Everything is implicit, in terms of metadata, access patterns, etc. Conversely, accessing a service-oriented data source requires explicitly describing its data model and its data manipulation semantics. Services can be either fine-grained or coarse-grained, and you need to capture that. Data access has its own contribution to make to the Semantic Web.
  • When thousands of data sources become available as data services (like mainframe screens or the APIs of packaged applications), we will need tools to combine them automatically at runtime. Manual, hard-coded or even visual composition of data services is only an option when dealing with a few data services. Dynamic composition of data services (e.g. aggregation of fine-grained data services into larger, coarse-grained data services as required by ever-changing business functionalities) is required by truly agile IT. Otherwise "agile" will turn into "fragile"!
  • Ad-hoc data mashups will require the availability of the right data services at the right time. This can only be achieved by platforms able to dynamically create and publish new data services as they become required.
  • Access to non-structured data will grow. At the same time, non-structured data is on its way to structuring itself, or at least describing itself better; see the "Linked Data", "OpenCalais" and "Web of Data" efforts, for instance.
  • Accessing multiple data sources with different latencies will require reactive data integration patterns. We will have to support asynchronous data access, and we will need tools for that, because asynchronous and parallel programming do not come naturally to most developers and architects.

As Martin Fowler concludes, data services platforms enable the promises of SOA by favoring small business functionalities that own their storage, instead of sharing data in huge centralized databases.
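The asynchronous, multi-latency access mentioned in the list above can be sketched quickly. This hypothetical Python example queries three simulated data sources in parallel, so the total latency approaches that of the slowest source rather than the sum of all three:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(source, delay):
    """Stand-in for a remote data-service call with its own latency."""
    time.sleep(delay)
    return {"source": source, "rows": 10}

# Invented source names and latencies, for illustration only.
sources = [("crm", 0.1), ("erp", 0.1), ("mainframe", 0.1)]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(fetch, name, delay) for name, delay in sources]
    results = [f.result() for f in futures]
elapsed = time.perf_counter() - start
# The three calls overlap, so the elapsed time is close to the slowest
# single call (about 0.1 s), not the 0.3 s a sequential loop would take.
```

The point of the tooling argument is that most developers write the sequential loop by default; a good platform should make the overlapping version the easy one.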

Monday, November 10, 2008

Business Objects Data Services

I've recently read several white papers about the new Data Services offer from Business Objects.

It seems to me that this is basically a renaming of their former ETL and Data Quality products.

It is not fundamentally surprising to see ETL vendors moving to Data Services, as EII vendors did before them.
Roughly speaking, an ETL is a tool to move data from DB1 to DB2: more exactly, extract data from DB1, transform it somewhere (huge debate here) and then load it into DB2.
Now, suppose you replace the third step with “publishing data”: you then have an "ETP", or even a Data Services Platform if you publish the resulting views as Web Services.
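The extract-transform-publish idea fits in a few lines. This is a hypothetical Python illustration (table and field names invented), with JSON serialization standing in for real Web Service publication:

```python
import json
import sqlite3

def extract(conn):
    """E: pull raw rows out of the source database (DB1)."""
    return conn.execute("SELECT id, amount FROM orders ORDER BY id").fetchall()

def transform(rows):
    """T: reshape raw rows into business-level records."""
    return [{"order_id": oid, "amount_eur": amount} for oid, amount in rows]

def publish(records):
    """P: instead of loading into DB2, expose the result as a
    service payload (here simply serialized to JSON)."""
    return json.dumps(records)

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
payload = publish(transform(extract(src)))
```

Swapping the load step for a publish step is all it takes to turn a batch pipeline into a (read-only) data service, which is exactly the repositioning described above.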

That probably still targets read-only, non-real-time data integration, but at least it demonstrates that Data Services are gaining momentum on the market.

Saturday, November 8, 2008

SOA social

I found the article mentioned in the previous post on this portal -> SOA Social.
You'll find interesting resources over there.

The case for coordinated EDM and SOA

An article by Keith Worfolk in SOA World about the benefits of coordinated strategies for Enterprise Data Management and SOA.

Needless to say that a Data Services Platform should be the beating heart of these coordinated strategies.

I cannot agree more with the first best practice described by the author:
"...When thinking about services, don't forget to consider the data.
Systematically designing a service model is like designing a data model. For either, its impact should be considered long term, and the level of normalization of designed components, services, or data is considered a sign of quality and maturity.


Figure 6 shows service-data normalization from immature to mature organizations:

  • "Wild West": Non-existent or ad hoc and uncoordinated normalization
  • Ownership/Stewardship: Service designs built on data designs
  • Encapsulation: Service and data designs coordinated in development/maintenance initiatives; either may drive the other as long as they are coordinated
  • Object: One and the same service/data designs. Normalized designs are within EIA designs; service implementations take data ownership to another level where master data value is known only in service designs/implementations.
Most organizations pursuing services-data normalization have progressed to ownership/stewardship levels, yet need to reach encapsulation before realizing major benefits in efficiencies, maintenance costs, and asset business value.

The highest level of service-data normalization, object, may not make sense for some organizations, especially where master data or business services change frequently. Depending on their stability, the more possible an object level may be. However, cost/benefit analysis may make encapsulation preferred for some organizations.

Transitioning to advanced service-data normalization is a process of increasing organizational maturity toward coordinated EDM-SOA strategies..."

More on LINQ to SQL

Some additional comments about the possible end of LINQ to SQL in Julia Lerman's blog.

Friday, November 7, 2008

Data Mashups: Enabling Ad-Hoc Composite, Headless, Information Services

ZapThink just released a research paper about Data Mashups.
Extending the notion of mashups, data mashups will decouple data integration from heavy development cycles. But as ZapThink's Ron Schmelzer writes, this requires a strong Data Services Layer to be in place.

I fully agree with the following statements from the research:
"...the IT organization must give Service consumers the tools and methods they need to be able to successfully compose those Services with low cost and risk..."

And this is exactly why Dynamic Data Services are so important. Having statically defined, hard-coded (or visually composed) data services may meet the requirements of statically defined service-oriented processes, but the reactive enterprise needs more flexibility. It is important to have the relevant data services available in real time when data mashups require ad-hoc data. A good Data Services Platform must support this kind of runtime generation and deployment of ad-hoc data services.

Later the author writes:
"...One of the important benefits of a Data Services layer is that it enables loose coupling between the applications using the Data Services and the underlying data source providers. Loose coupling enables data architects to modify, combine, relocate, or even remove underlying data sources from the Data Services layer without requiring changes to the interfaces that the Data Services expose. As a result, IT can retain control over the structure of data while providing relevant information to the applications that need it. Over time, this increased flexibility eases the maintenance of enterprise applications..."

In a world where most data sources will become service-oriented (even the databases themselves), it is important to really achieve decoupling between the data services and the data sources. In this particular case, this requires extended semantic metadata around data services, so that an advanced Data Services Platform can dynamically recompose them at runtime, as requested by new data mashups.

Exalead CloudView

Exalead repositions its offering towards unstructured data integration in SOA and Cloud environments.
http://www.exalead.com/software/news/press-releases/2008/09-24.php
http://www.exalead.com/software/products/cloudview/

Steve Mills (IBM) on Information on Demand

It is always interesting to hear what Steve Mills (VP of IBM Software Group) has to say about data in general and his comments about IBM's strategy regarding Information on Demand.

Here ->
http://searchdatamanagement.techtarget.com/generic/0,295582,sid91_gci1337742,00.html

It is always good to repeat that Information is data brought to life: data with a meaning and a business value. In an object-oriented world, one would say Information is the State part of an object.
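A toy Java sketch of that remark (the class and its names are purely illustrative): the fields are the State, i.e. the raw data, and the behavior is what gives that data its business meaning.

```java
public class Invoice {
    private final double amount; // state: raw data
    private final boolean paid;  // state: raw data

    public Invoice(double amount, boolean paid) {
        this.amount = amount;
        this.paid = paid;
    }

    // behavior: interprets the state as business-meaningful information
    public double outstanding() {
        return paid ? 0.0 : amount;
    }

    public static void main(String[] args) {
        System.out.println(new Invoice(100.0, false).outstanding()); // prints 100.0
    }
}
```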

Thursday, November 6, 2008

Entity Framework Futures

http://mschnlnine.vo.llnwd.net/d1/pdc08/WMV-HQ/TL20.wmv, by Tim Mallalieu at PDC 2008.

Interesting and entertaining at the same time.

Abstract
The next version of the Entity Framework adds scenarios in the areas of model driven development, domain driven development, simplicity, and integration. See a preview of production and prototype code for the next version of the Entity Framework as well as a candid discussion with members of the development team.

Wednesday, November 5, 2008

Windows Azure

Microsoft has recently announced its new services infrastructure, designed to compete with other Cloud and SaaS offerings.

See the countless articles, news items and blog entries related to this product launch.

Let's see how data access will be addressed in this upcoming offering... First answers on Pablo Castro's blog: http://blogs.msdn.com/pablo/archive/2008/11/01/ado-net-data-services-in-windows-azure-pushing-scalability-to-the-next-level.aspx and http://blogs.msdn.com/pablo/archive/2008/10/28/now-you-know-it-s-windows-azure.aspx

Developing applications with Data Services

Interesting video of a Data Services session from the last Microsoft PDC.

Abstract:
TL07 Developing Applications Using Data Services
Presenter: Mike Flasko (Also see his blog).

In the near future, applications will be developed using a combination of custom application code and online building block services, including data-centric services. In this session we discuss advancements in the Microsoft development platform and online service interfaces to enable seamless interaction with data services both on-premises (e.g., ADO.NET Data Services Framework over on-premises SQL Server) and in the cloud (e.g., SQL Server Data Services). Learn how you can leverage existing know-how related to LINQ (Language Integrated Query), data access APIs, data-binding, and more when building applications using online data.

SOA Approach to integration

I recently read this post on TSS, where someone claims REST is object-oriented while SOAP would be process-oriented. That's a funny way to compare these approaches. It is not false, but SOAP can also be object-oriented if you want, and Data Services are all about that. The difference is that you can manage the level of granularity in data integration: you are not limited to encapsulating "atomic resources" (whatever that means) with CRUD APIs.
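To illustrate the granularity point with a toy Java sketch (the interfaces are hypothetical, not taken from any real API): a resource-style design exposes one fine-grained CRUD surface per resource, while a data service can expose a coarser operation composed from them, at whatever granularity the integration needs.

```java
import java.util.List;

// REST-ish style: one fine-grained CRUD surface per "atomic resource".
interface CustomerResource { String read(int id); }
interface OrderResource { List<String> readByCustomer(int customerId); }

// Data-service style: the granularity is chosen to fit the integration,
// so one call can return an already-composed customer+orders view.
class CustomerOrdersService {
    private final CustomerResource customers;
    private final OrderResource orders;

    CustomerOrdersService(CustomerResource customers, OrderResource orders) {
        this.customers = customers;
        this.orders = orders;
    }

    String customerWithOrders(int id) {
        // composes two fine-grained reads into one coarse-grained result
        return customers.read(id) + ": " + orders.readByCustomer(id);
    }
}

public class GranularityDemo {
    public static void main(String[] args) {
        CustomerOrdersService svc = new CustomerOrdersService(
            id -> "Customer#" + id,
            id -> List.of("Order-A", "Order-B"));
        System.out.println(svc.customerWithOrders(7));
        // prints: Customer#7: [Order-A, Order-B]
    }
}
```

Nothing here forces the service to stop at one resource per call, which is exactly the freedom of granularity that a pure CRUD-per-resource API does not give you.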

Data Services at Microsoft TechEd EMEA 2008

DataDirect will be exhibiting at Microsoft Tech·Ed EMEA 2008, taking place at Barcelona's Centre Convencions Internacional, 10 – 14 November 2008.

In addition, Solutions Architect John de Longa will present “Frontiers in Data Access” on Tuesday, 11 November 2008 from 14:50 to 15:10 in theatre two, followed by a second presentation on Wednesday, 12 November 2008 at 15:20.

In his presentation, “Frontiers in Data Access” John de Longa will offer technical insight and valuable advice for enterprise, system and data architects as well as application developers and managers. He will discuss how to improve the scalability and flexibility of data access strategies.

“As more organisations implement service-oriented architectures they find themselves with a multitude of business services that need to access enterprise data – too often data access issues are overlooked until they become a problem,” explains John de Longa. “I’ll be exploring the concept of data services as an emerging approach for addressing data challenges in SOA.”
Data services enhance flexibility and simplify application development by providing a consistent mechanism for accessing, integrating and updating enterprise data, regardless of where it is stored.

Data Services World 2008 in San Jose

The second edition of Data Services World will be held in San Jose on November 20th, 2008. Once again, DataDirect will be the main sponsor of this event.
Rob Steward, our VP of Engineering, will present “The New Frontier for Data Services”, and I will participate in the power panel.
This is a great event for our technologies, and I will be happy to meet you there and discuss the trends in Data Services.

LINQ to Entities and LINQ to SQL

Microsoft has finally decided to focus on LINQ to Entities for .NET 4.0.
See the ADO.NET blog and this entry.

This is a great decision, because their data access offering was more than confusing to users, with too many ways to access the same data and too much overlap between the different technologies.

The good news is that Microsoft recognizes the importance of having an intermediate Business Model to integrate data. This is a great milestone for the whole software industry!
