After missing Lotusphere for the first time last year, I'm glad to be back again.
My employer didn't send anyone from our team this year, but I decided to take the time off and come on my own. I'll be working at the OpenNTF booth in the product showcase.
As usual, the flight down was a reunion in its own right, with Mary Beth Raven and IBMers Dave Kern and Mark Vincenes on the same flight. I'm staying at the All-Star Resort, so I haven't walked into the Dolphin lobby for the second phase of the reunion yet. Actually, since my wife and I have scheduled some "Us Time" at EPCOT on Saturday and Universal Islands of Adventure on Sunday, I may not get to the Dolphin until just before the Sunday night opening party.
If Jeff Jarvis' name isn't immediately familiar, you may recall Ed Brill's post from about a year ago, highlighting Jeff's about-face after his webcast had made fun of Howard Stern's use of Lotus Notes. Now, in an article on Huffington Post, Jeff Jarvis says about the Circles feature of Google+, "We don't come to social services to hide secrets; that would be idiotic. We come to share." He goes on to say "600 million people can't be wrong. We are sharing a billion things a day on Facebook alone because we want to, because we find value in it. That's where the discussion should begin, with the power of publicness, not with the presumption of privacy."
This is a superficial and short-sighted view. 600 million people may not be wrong, but are they as satisfied as they could be? Are they sharing as much as they could be? What is their major complaint about Facebook, if not privacy?
Almost ten years ago, I wrote an article on notes.net about field encryption in Lotus Notes. In the first two paragraphs, I talked about why the presumption of privacy was such a crucial aspect of the early success of Lotus Notes:
Since its first release more than ten years ago, Lotus Notes has been the premier solution for sharing information in a corporate environment. From day one, the developers at Iris Associates realized that for sharing to be successful, there had to be a strong security system in place so that sharing could be limited.
That sounds a bit paradoxical, but it reflects a basic reality: users won't put information into a system if they don't trust that the system will only give that information to the right people. One of the key concepts that the developers of Notes understood was that although programmers and system administrators are very cool, upstanding, and important people, some users don't necessarily want to have to trust them with all their information.
This principle applies just as much to public systems as it does to corporate systems. It applies just as much to social networks as it did to Lotus Notes. It is a fundamental principle for all social software. It's certainly nothing new.
Early in my career, way back before there was even a Lotus Notes 1.0, when private email systems were the state of the art in information sharing, I stopped shipment of a new release of a successful corporate email system because of a single bug report from one alpha tester of one message that had been delivered to the wrong recipient. I spent the next three weeks in a lab with two colleagues working to develop a theory about how it had happened (it was due to an unsigned value being treated as signed, believe it or not), build a test environment to prove the theory and reproduce the problem at will (that was actually the really hard part!), implement a fix, and verify it. Management agreed to stop shipment without any estimate of how long it would take to find and fix the problem because they knew the repercussions of a loss of trust in the privacy of information.
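The unsigned-treated-as-signed failure described above is easy to reproduce in miniature. Here is a minimal, hypothetical Java sketch of the same class of bug (the original system predates Java; the class name and the recipient-index scenario are invented purely for illustration):

```java
// Hypothetical illustration of an "unsigned" value being treated as
// signed. In Java, widening a byte to an int sign-extends, so any
// wire value of 0x80 or above becomes negative.
public class SignExtensionDemo {

    // Buggy: widens the byte directly, so 0x90 becomes -112,
    // an impossible (negative) recipient index.
    static int badIndex(byte raw) {
        return raw; // implicit sign extension
    }

    // Fixed: mask off the high bits to recover the unsigned value 144.
    static int goodIndex(byte raw) {
        return raw & 0xFF;
    }

    public static void main(String[] args) {
        byte raw = (byte) 0x90; // recipient slot 144 on the wire
        System.out.println("buggy index: " + badIndex(raw));  // -112
        System.out.println("fixed index: " + goodIndex(raw)); // 144
    }
}
```

A negative (or wrapped) index like this is exactly the kind of value that can silently select the wrong entry in a routing table.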
600 million Facebook users aren't wrong, but they are not sharing as much as they could be willing to share, and Google knows this. It's a smart move for Google because it recognizes that there is information we want to share with family, information we want to share with professional colleagues, and information we want to share with classmates, and there have been no substantial advances in meeting that need since the invention of email lists. Google knows that Facebook is phenomenally successful but has a problem because it is set up for over-sharing, and that contributes to under-sharing. Google knows that this is a weakness that can draw users to Google+ if Facebook doesn't respond. Of course, Google+ doesn't go as far as implementing end-to-end encryption, so it's not going to be as trusted as it could be either, but it's a public system, and that level of security would be counter to Google's goal of exploiting the data to generate advertising revenue. It's still a step in the direction of more sharing, and it's disconcerting that Jeff Jarvis doesn't understand or value this.
It is generally, though probably not universally, accepted that robust interfaces are a good thing. This was expressed in RFC-761 as "be conservative in what you do, be liberal in what you accept from others," which is frequently referred to as Postel's Law or The Robustness Principle. Robustness, however good it is in principle, is not always as easy to deal with in practice. This is especially true for people who are integrating systems built from components that have different ideas about how liberal they should be. I face this issue pretty regularly when dealing with messages that the Lotus Domino SMTP server has accepted from non-Domino systems and delivered to users. Domino can be very liberal about illegal formatting of inbound headers and MIME content, storing them as-is and making it the next guy's problem to figure out what to do with them. And while the Notes client often does manage to do something sensible with non-conformant data, some of the industry-standard APIs that third-party developers use to parse such messages can be much less forgiving.
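The tension described above can be sketched with two toy header parsers, one liberal and one conservative. Neither is a real MIME library; this is just a self-contained illustration of how one component can accept input that a stricter downstream component rejects:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of Postel's Law in practice: two parsers for
// RFC-822-style "Name: value" header lines.
public class RobustnessDemo {

    // Liberal: silently skips lines that have no colon.
    static Map<String, String> lenientParse(String[] lines) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (String line : lines) {
            int colon = line.indexOf(':');
            if (colon < 1) continue; // tolerate and drop malformed lines
            headers.put(line.substring(0, colon).trim(),
                        line.substring(colon + 1).trim());
        }
        return headers;
    }

    // Conservative: refuses the whole message on the first bad line.
    static Map<String, String> strictParse(String[] lines) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (String line : lines) {
            int colon = line.indexOf(':');
            if (colon < 1) {
                throw new IllegalArgumentException("malformed header: " + line);
            }
            headers.put(line.substring(0, colon).trim(),
                        line.substring(colon + 1).trim());
        }
        return headers;
    }

    public static void main(String[] args) {
        String[] message = { "Subject: hello", "X-Broken-Header-No-Colon" };
        System.out.println(lenientParse(message)); // keeps Subject, drops the bad line
        try {
            strictParse(message);
        } catch (IllegalArgumentException e) {
            System.out.println("strict parser rejected: " + e.getMessage());
        }
    }
}
```

A liberal server plus a strict consumer is exactly the mismatch that turns "robustness" into an integration problem.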
So... robustness is good, but inconsistent robustness across different systems... not so much. And inconsistent robustness within one API set? That's what I'm really here to talk about!
Let's look at two lines of code that each call the Domino back-end classes. Here's the first one:

db.open("svr1/rhs", "mail\\user.nsf");
Except for the semicolon at the end, it doesn't matter if this is Java or LotusScript. The sharp-eyed amongst you might notice something else that could matter, but in fact it doesn't. In both languages, if the db object is properly declared and instantiated, and the server "svr1/rhs" is running, reachable and accessible, and if the named database file exists and is accessible, this call works.
Now, let's make a small change to the code:

db.openWithFailover("svr1/rhs", "mail\\user.nsf");
This code also still works, regardless of whether it is Java or LotusScript, and with the same stipulations I stated above. But this code is intended for a clustered environment, so what happens if we remove the stipulation that svr1/rhs is running, but add the stipulation that it is a member of a Domino cluster and another server in the cluster (call it "svr2/rhs") has a replica of the same database on it? That's where an inconsistency occurs, but in LotusScript only. The Java code works as expected, opening the database on the second server in the cluster, but the LotusScript code fails.
This happens because there's an error in the LotusScript syntax: the double backslash is really a double backslash, not an escaped single backslash the way it is in Java. When LotusScript passes the pathname with the double backslash to the Notes back-end classes, it passes them along to the Notes C API, which passes it on to the robust APIs for the filesystem on a Domino server running on Windows, which ignore the extra backslash. (I'm pretty sure this is also true on most of the other Domino platforms, if not all, but I haven't verified it.) That's why the first code example worked despite the error in syntax, and that's why openWithFailover works as long as the target server is up and running. But when the target server is down, openWithFailover tells the Notes C API to find out what cluster it is in and find another server in the cluster with a replica of the database, and it turns out that something in that process is not as robust. The extra backslash causes a lookup to fail, and no failover occurs.
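To make the escaping concrete, here is a small Java sketch showing what each source form actually puts into the string. The collapse() helper is an assumption of mine that mimics the forgiving filesystem behavior described above; it is not Domino's actual code:

```java
// What each language hands to the API: in Java source, "\\" is one
// real backslash and "\\\\" is two; in LotusScript, "\\" in source
// really is two backslashes in the string.
public class BackslashDemo {

    // Collapse runs of backslashes to a single one, roughly the way a
    // robust Windows filesystem layer treats "mail\\user.nsf".
    // (Hypothetical helper, for illustration only.)
    static String collapse(String path) {
        return path.replaceAll("\\\\+", "\\\\");
    }

    public static void main(String[] args) {
        String javaPath = "mail\\user.nsf";      // one real backslash
        String lsStylePath = "mail\\\\user.nsf"; // two real backslashes,
            // i.e. what LotusScript source "mail\\user.nsf" sends

        System.out.println(javaPath.length());    // 13
        System.out.println(lsStylePath.length()); // 14
        System.out.println(collapse(lsStylePath).equals(javaPath)); // true
    }
}
```

The filesystem forgives the doubled backslash, so the open succeeds; the cluster lookup apparently does not forgive it, so the failover fails.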
This, by the way, is why I asked a few days ago whether the openWithFailover method actually works. The code I was testing it with was written in LotusScript, and it had double backslashes in the pathname. IBM support helped me figure this out. I feel a bit embarrassed, but then again, not really.
I should mention, in case it's not clear, that I'm not saying there is an inconsistency between LotusScript and Java. There's a syntax difference, and that is certainly expected. The inconsistency I'm talking about is the handling of a double backslash in the open and openWithFailover methods. The Java code above never sends double backslashes because the Java language has already resolved them to a single backslash, so it's not an issue... but it is certainly possible to write Java code that really does send double backslashes. One probably wouldn't code it directly, as in "mail\\\\user.nsf", but it could easily look like this:
String folder = doc.getItemValueString("folder");
...
db.openWithFailover("svr1/rhs", folder + "\\" + file);
If the folder item that was read into the folder variable already had a backslash character at the end of its value, then this Java code would get the same inconsistent result as the LotusScript. It would work fine as long as svr1/rhs is up, just as it would if it were just a call to db.open. But if svr1/rhs were down, this Java code would fail. This type of code, by the way, is the reason that filesystem APIs tend to be forgiving of extra slashes or backslashes.
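Here is that pitfall as a runnable Java sketch, along with a defensive join. The joinPath() helper is an invented illustration, not part of the Domino API:

```java
// Hypothetical sketch of the concatenation pitfall: if a stored folder
// value already ends in a backslash, naive joining produces the same
// double backslash that defeats failover.
public class PathJoinDemo {

    // Defensive join: strip any trailing backslashes before adding
    // exactly one. (Invented helper, for illustration only.)
    static String joinPath(String folder, String file) {
        String trimmed = folder.replaceAll("\\\\+$", "");
        return trimmed + "\\" + file;
    }

    public static void main(String[] args) {
        String folderFromDoc = "mail\\"; // item value ends in a backslash
        String naive = folderFromDoc + "\\" + "user.nsf";

        System.out.println(naive);                               // mail\\user.nsf
        System.out.println(joinPath(folderFromDoc, "user.nsf")); // mail\user.nsf
    }
}
```

Normalizing the path in your own code sidesteps the question of which layer underneath will or won't forgive the extra separator.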
There's no question that the double backslash in LotusScript is an error. It is, however, a pretty predictable error. It certainly is for me, given that three of the four languages I deal with on a regular basis (Java, @formula, and C) all require escaping backslashes. Filesystem APIs generally consider this to be an innocent error. The Notes classes usually end up treating it as an innocent error, too, but not always.
I have to confess that, for the longest time, I had no idea that an ordinary NotesDatabase.Open() call is not cluster-aware. Clustering in Lotus Notes and Domino, after all, is magic, right? But the ordinary open(server, path) method in the LotusScript and Java back-end classes does not fail over if the specified server is not responding. There is a special openWithFailover(server, path) method, and that's what you're supposed to use if you want failover to occur. I can see the logic in this, as there are many cases where you really do have to carefully control which server you are working with.
So a few weeks ago I started thinking about upgrading some code to use openWithFailover, but this code runs in an unusual configuration (scheduled agent running on a server in domainA accessing databases on servers in domainB), so I decided to do some methodical testing, using a stripped down piece of test code. Before testing this code in the final configuration, however, I figured that I should test it in a simple configuration and prove a few things. After all, it would be nice to know the code doesn't have bugs before I try it in an environment where I'm not particularly confident that it will work.
First, of course, I had to build a cluster. Nothing could be simpler, really, but it had been a few years since I'd done it, so I took two servers, one on 8.5.1 and one on 8.5.2, and just followed the instructions in the Domino Administrator help file. Half an hour later I had a cluster, with cluster replication doing its thing, and failover occurring as expected on a Notes 8.5.2 client when I took either one of the servers down. Then I wrote a few lines of LotusScript and ran it as a client-side agent from the Actions menu. The code instantiated three NotesDatabase objects. It used the open() method on the first object to open a test database on one of the servers in the cluster. It used the open() method on the second object to open a replica of the same database on the other server. And it used the openWithFailover() method to open the same database again on the first server. In all three cases, the code called the NotesDatabase.isOpen() method to verify success, and when both servers were up the results were exactly as expected -- all three databases opened. But when the first server was down, the isOpen() test failed after both the first open() and the openWithFailover() call failed.
Just to be sure, I stripped down the code even further, so it just instantiates one single NotesDatabase object and uses the openWithFailover() method, then tests isOpen(). Again, it works fine when the server is up, but does not fail over to the second server in the cluster when the first server is down. I double- and triple-checked everything but found no problems with either my cluster or my code. My client configuration is not an issue as far as I can see. I can connect to both servers in the cluster, and the fact that both regular open() calls do succeed proves that connectivity isn't a problem in the agent. So, a few days ago I opened a PMR with IBM and demonstrated the problem to them in a screen-sharing session. We checked a few things, like the cluster.ncf file, and then they asked me to send in my code. Today the support engineer informed me that he is getting the same results in his own test environment.
At this point, I just have to wonder... Has anyone actually observed OpenWithFailover() working?
Engadget has a video demo of the new HP TouchPad. I really like my Android phone, with its wide selection of apps, so I'm going to defer judgment until I find out what's going to be available for the TouchPad, but it sure does look tempting.
For anyone with loads of time on their hands (almost 2 hours), here's a video of the full product announcement event.
I'm not there. It's so strange for me to not be at Lotusphere. The only person who is happy about this is my wife, especially given the 15 to 20 inch dumping of snow we're expecting to be ending right about the same time as all of Lotusphere is heading off to Harry Potter land.
It's actually not the first time that I've missed being at the OGS, though. I had an apparent bout of food poisoning on Sunday in 2006, and skipped the welcome party and watched the OGS from my room in order to conserve strength so I could start getting back into the swing of things in the afternoon. I have to say that this year's stream, supplemented by Twitter coverage, was much better. There were just a few momentary audio drop-outs. The counter on the site showed over 1200 people watching the live stream at its peak, and I give IBM a lot of credit for getting the technology right so that those of us who couldn't be there could share a bit of the experience.
Now I'm scanning the posts from various bloggers who have reported on the OGS. I haven't read them all yet, and there are probably many more coming, but what I've seen so far confirms my first and strongest impression of the OGS: nobody is reporting on Project Vulcan per se, and there seems to be a gap in continuity of message from last year. I don't think Vulcan was particularly well understood by a lot of people last year, at first. After taking in all of Lotusphere 2010 and letting it digest for a few days, I wrote "I see Vulcan as the logical continuation of what IBM was doing with activities." Ed Brill reacted positively to my analysis, and he noted how the community had been going through a process of "iterative thinking" about it, trying to understand it. I was hoping that IBM's message this year would have done a better job of explaining Vulcan and pointing out how it is moving from vision to reality, but I didn't see that in the OGS.
Don't get me wrong. Vulcan was clearly there in the OGS, but having been to so many Lotuspheres, I tend to think of them building one upon another, like chapters in an unfolding story. It seems to me that in constructing the OGS for one year, continuity from the major themes of the previous year should be a major goal. That continuity should be explicit, and not some implicit clues left as a puzzle for the community to figure out. Explicitly following on last year's themes would certainly be in keeping with the "smooth sailing, full speed ahead" metaphor that I used in my post last year, which I think is pretty consistent with what IBM wants to convey to enterprise customers.
I know that there's only so much time in the OGS, and they went over by a good bit this year, but Vulcan really was the major future-oriented takeaway last year, and in light of that I think it just wasn't called out enough in the OGS this year. There was a lot of other stuff in the OGS this year that got a lot more time than Vulcan and, IMHO, contributed a lot less to what the audience came for. Vulcan wasn't re-explained, or even summarized for those who weren't there last year. It wasn't clarified for those who didn't fully grasp its importance last year. And words like this were not heard: "Last year we introduced you to our vision of Project Vulcan. Here's what that vision has evolved into this year, here are the products and tools that you are seeing this year that are delivering on the Vulcan vision, and here's what we expect this all to be evolving toward when you come back in 2012 and 2013."
I imagine that people who go to the right breakouts will get something like this, and I hope it emerges clearly in the blogs, because Vulcan is there but, like I said, I'm not there.
Sadly, my streak of consecutive Lotusphere conferences in Orlando is about to come to an end. My employer didn't buy a sponsorship last year, but I managed to get approval to attend. This year we're also not sponsoring, and I didn't ask for approval because I felt that some of my colleagues who didn't get to go last year should get their turn. I did put in an abstract for a session, but I felt it was a long shot, and I was right. (Note: no sour grapes should be read into this. The nature of the work I do now is such that I'm not keeping up with the latest cool stuff that makes for the most compelling sessions.) I could have decided to pay my own way, and I kept that possibility open as long as I could -- even past the end of early-bird discounts, but let's just say that I have other priorities for the money.
So I won't be there. It will feel very strange to be somewhere other than Orlando that week, but the world won't come to an end.
DRM would have been bad for WikiLeaks.
If it worked, that is.
It does seem to me that discussions of the legality or illegality of what the publishers have done, what the participants in WikiLeaks have done, and what the individual who allegedly leaked all the information did are really of secondary importance. The big issue is this: how is it possible that, with all the smart crypto and security experts at the disposal of the US government, such a large and diverse batch of classified data was made available in plaintext to one person without setting off alarms before it could be leaked? In this day and age, it seems to me that the biggest failure -- and the one that is most likely to go unpunished -- is that the expansion of both the amount of classified data and the number of people with clearances has clearly and foreseeably exceeded the capabilities of the US government to effectively manage the human and physical elements of the system, and yet the government hasn't implemented a DRM system. It's not hard to conceive of how such a system could provide convenient and reliable access to individual authorized users for specific documents while still providing strong protection for large batches of documents. No, it's not simple by any means, but work on such a system should have started a decade ago.