
OASIS Service Oriented Architecture Reference Architecture Face to Face

And if you make it past the title ...

This week we had a face-to-face meeting of the RA group. About 10 to 12 people dropped in at some point during the meeting, with a hard core of 7 or 8.

I think that the work is still not at the Jello stage, but we are beginning to get there.

This is not yet reflected in the written work, but there will be three main sections, each of which represents a major viewpoint on the reference architecture:

  1. Business as Service view
  2. Realizing Service Oriented Architecture
  3. Owning Service Oriented Architecture

The first view focuses on how people fit into the SOA; the second on how you put one together; and the third on keeping one going.

I am particularly happy with this kind of breakdown, as I believe it reflects customers' true concerns.
