First my summaries of opinions, then links to my comments here and there.
Some of my opinions on a RESTful Client Engine, and what types of server-side changes would NOT break a client:
- URIs are discovered (except the first)
- Server-provided data extensions (hidden fields with defaults in a form) are treated like "does not understand" but still submitted properly
- Changes to which HTTP verb is used. The server can swap PUT for POST without issue (the verb is discovered in the form, not spec'ed in a schema)
- Changes in state path. "checkout" could directly be a single form with POST, a GET link to a single form with POST, a POST to a form with reliable PUT semantics, ...
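To make the points above concrete, here is a minimal sketch of a form-reading client engine in Python. The HTML, the field names ("token", "item"), and the "/cart" action are all invented for illustration; the point is that the verb and the hidden fields come from the representation, not from a compiled-in schema:

```python
# Sketch of a client engine that discovers the verb and fields from a
# hypermedia form instead of hard-coding them. All names are invented.
from html.parser import HTMLParser

class FormReader(HTMLParser):
    """Collect the first form's action, method, and input defaults."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.method = None
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and self.action is None:
            self.action = attrs.get("action")
            self.method = attrs.get("method", "GET").upper()
        elif tag == "input" and attrs.get("name"):
            # Unknown hidden fields are kept and submitted with their
            # defaults: "does not understand", but still cooperates.
            self.fields[attrs["name"]] = attrs.get("value", "")

representation = """
<form action="/cart" method="post">
  <input type="hidden" name="token" value="abc123"/>
  <input type="text" name="item" value=""/>
</form>
"""

reader = FormReader()
reader.feed(representation)
reader.fields["item"] = "book-42"   # the only field this client understands
# The server can later swap POST for PUT here without breaking us:
print(reader.method, reader.action, reader.fields)
```

If the server adds another hidden field tomorrow, this client submits it untouched; if it swaps the method, the client follows along.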
- Sam Ruby's suggestions for HTML5 Distributed Extensibility are a fabulous starting point.
- URI Templates, HTML Forms, XForms, Web Forms, WADL
- XSD is being extended to better support versioning
- Data schemas designed for extensibility should allow everything possible to be optional
- Microformats enable opportunistic clients to machine process
- GRDDL leaps from microformats, XML, and some JSON up to RDF
- HTML+Microformats can do a ton of this already, but with no machine-processable anything
- Prediction: in 10 years all of this will have exploded/merged into one or a few really cool and evolvable data/interaction schema systems
- from Stu Charlton comes a reference to interaction machines
- from Todd Hoff come two NASA references on Mission Planning and Closed Loop Execution and Recovery
- papers from Luis Caires, for example Spatial-Behavioral Types, Distributed Services, and Resources
(I forget where I found this material... can't understand it yet...)
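Of the schema systems listed above, URI Templates are the simplest to illustrate. A minimal sketch of level-1 expansion (plain {var} substitution with percent-encoding, per the URI Template spec), with invented template and variable names:

```python
# Minimal sketch of level-1 URI Template expansion: a server can hand
# out templates like "/tasks/{owner}/{id}" so clients never hard-code
# URI structure. Template and variable names here are invented.
import re
from urllib.parse import quote

def expand(template: str, variables: dict) -> str:
    """Replace each {name} with the percent-encoded variable value."""
    return re.sub(
        r"\{(\w+)\}",
        lambda m: quote(str(variables.get(m.group(1), "")), safe=""),
        template,
    )

print(expand("/tasks/{owner}/{id}", {"owner": "ann", "id": "7"}))  # /tasks/ann/7
```

Real implementations cover more of the spec (query expansion, lists, reserved characters), but even this level lets the server move URIs around freely.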
The following is a guide to my recent comments on these threads from around the web:
My comment is that clients should be opportunistic: if they understand some shared semantic (like a RESTful shopping API or task manager), they can automate some interactions.
Should this idea be extended to the rest of non-user-facing, resource-oriented applications? I don't think so. Here is why. The idea of hypermedia embedding all the action controls necessary to interact with the server works well for an arbitrary number of universal clients interacting with a given server: the server offering a set of resources specifies all the ordering/interaction rules within the representation. Most application clients, on the other hand, interact with more than one server, and the ordering constraints cannot be set by any single server. The clients know how to compose applications out of resources offered by various servers, and each client needs to be able to exercise control over that composition. To exercise such control, client applications cannot be universal, and the benefits John lists above cannot be completely realized.
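As a concrete sketch of the single-server case described above: a client that treats the representation's embedded controls as the only legal next steps, so the server alone decides the ordering. The "order" resource, its states, and the rel names are invented for illustration:

```python
# Sketch of server-controlled ordering: the client may only take the
# transitions the current representation advertises. Resource shape
# and "rel" values below are invented.

def allowed_actions(representation: dict) -> set:
    """A representation embeds its own controls; nothing else is legal."""
    return {link["rel"] for link in representation.get("links", [])}

# Before payment the server advertises only "pay" and "cancel" ...
draft_order = {"state": "draft",
               "links": [{"rel": "pay", "href": "/orders/7/payment"},
                         {"rel": "cancel", "href": "/orders/7"}]}
# ... and after payment it advertises "track" instead.
paid_order = {"state": "paid",
              "links": [{"rel": "track", "href": "/orders/7/shipment"}]}

print(allowed_actions(draft_order))  # the server decides what comes next
print(allowed_actions(paid_order))
```

The moment the client composes resources from two servers, neither server's links can express the cross-server ordering, which is the limit argued above.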
On JJ Dubray's blog I make several comments.
In response to "REST creates strong coupling", I say:
- JJ is ignoring the shared definitions of MIME types
- The globally shared "Provider external" (Pe) semantics in REST are (URI, GET representation, hyperlinks)
- in WS-* the global Pe is 0 (zero); only particular agreed-upon uses have shared semantics
- For me REST in the enterprise isn't about scalability, but rather independent evolution and support for partial failure.
- In maybe 10 years there will be an XML schema language that properly supports versioning and extensibility
- I side with Patrick Mueller: there is something of value in more than just prose to document RESTful systems
- Hypermedia MIME types are _more_ than just a data schema (embedded forms to signify actions)
- A list of RESTful interactions that _shouldn't_ break a programmed client (listed above)
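The globally shared Pe semantics mentioned above (URI, GET representation, hyperlinks) are enough to write a truly universal client: essentially a crawler. A minimal sketch, with an invented page; anything more task-specific needs extra shared semantics, such as a hypermedia MIME type with embedded forms:

```python
# Sketch of a "universal" client built only from REST's globally shared
# semantics: URIs, GET-able representations, and hyperlinks. The page
# content is invented; a real client would fetch it over HTTP.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect every hyperlink URI found in a representation."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = '<a href="/orders">orders</a> <a href="/profile">me</a>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # anything beyond following links needs more shared semantics
```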