Using "Verboten" Property Names in Custom Controls

Sun Nov 02 09:46:28 EST 2014

Tags: xpages

In an attempt to save you from yourself, Designer prevents you from naming your custom control properties after SSJS keywords such as "do" and "for". This is presumably because a construct like compositeData.for would throw both a syntax error in SSJS and the developer into a tizzy. However, sometimes you want to use one of those names - they're not illegal in EL, after all, and even SSJS could still use compositeData['for'] or compositeData.get("for") to access the value.

Fortunately, this is possible: if you go to the Package Explorer view in Designer and open up the "CustomControls" folder of your NSF, you'll see each custom control as a pair of files: an ".xsp" file representing the control markup and an ".xsp-config" file representing the metadata specified in the properties pane, including the custom properties. Assuming you attempted to type "for" for the property name and were stuck with "fo", you'll see a block like this:

<property>
	<property-name>fo</property-name>
	<property-class>string</property-class>
</property>

Change that "fo" to "for" and save and all is well. You'll be able to use the property just like you'd expect with a normal property, with the caveat above about how to access it if you use SSJS. I wouldn't make a habit of using certain keywords, such as "class", but "for" is perfectly fine and allows your controls to match stock controls such as xp:pager.

This came up for me in one of the controls I like to keep around when dealing with custom renderers: a rendererInfo control to display some relevant information. Since I keep forgetting where I last used such a control, I figure I should post it here partially for my own future reference.

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
	<table>
		<tr>
			<th>Client ID</th>
			<td><xp:text><xp:this.value><![CDATA[#{javascript:
				var comp = getComponent(compositeData['for']);
				return comp == null ? 'null' : comp.getClientId(facesContext);
			}]]></xp:this.value></xp:text></td>
		</tr>
		<tr>
			<th>Theme Family</th>
			<td><xp:text><xp:this.value><![CDATA[#{javascript:
				var comp = getComponent(compositeData['for']);
				return comp == null ? 'null' : comp.getStyleKitFamily();
			}]]></xp:this.value></xp:text></td>
		</tr>
		<tr>
			<th>Component Family</th>
			<td><xp:text><xp:this.value><![CDATA[#{javascript:
				var comp = getComponent(compositeData['for']);
				return comp == null ? 'null' : comp.getFamily();
			}]]></xp:this.value></xp:text></td>
		</tr>
		<tr>
			<th>Renderer Type</th>
			<td><xp:text><xp:this.value><![CDATA[#{javascript:
				var comp = getComponent(compositeData['for']);
				return comp == null ? 'null' : comp.getRendererType();
			}]]></xp:this.value></xp:text></td>
		</tr>
		<tr>
			<th>Renderer Class</th>
			<td><xp:text><xp:this.value><![CDATA[#{javascript:
				var comp = getComponent(compositeData['for']);
				var renderer = comp == null ? null : comp.getRenderer(facesContext);
				return renderer != null ? renderer.getWrapped().getClass().getName() : 'N/A'
			}]]></xp:this.value></xp:text></td>
		</tr>
	</table>
</xp:view>
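
Using it is then just a matter of dropping the control onto a page and pointing its "for" property at the ID of the component you're curious about. A minimal sketch, assuming the control is saved as "rendererInfo" and the page has a component with the ID "inputText1":

<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" xmlns:xc="http://www.ibm.com/xsp/custom">
	<xp:inputText id="inputText1" />

	<!-- Renders the table of client ID, families, and renderer info for inputText1 -->
	<xc:rendererInfo for="inputText1" />
</xp:view>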

CocoaLove Reflection

Sun Oct 26 20:13:10 EDT 2014

Tags: cocoa

This weekend, I attended CocoaLove, a new Mac/iOS-development-related conference held in Philadelphia. Though my Cocoa resume consists of doing various tutorials every few years for the last decade or so, the location, concept, and speaker lineup were impossible to resist.

The upshot: this was a great conference. As the tagline – "A conference about people, not tech." – indicates, the sessions weren't technical or even generally about programming as such. Instead, it was a bit more in the ATLUG Day of Champions vein. They covered a range of useful "surrounding" topics, from self-image, to lessons from other industries, to diversity (in a far more interesting sense than that semi-buzzword makes it sound). The secondary push of the conference was social-in-the-sense-of-socializing - the keynote encouraged everyone to introduce themselves and the tables were stocked with levels-of-introversion pins, something that could be a silly conceit but worked well.

In fact, the socializing push worked remarkably well, thanks in large part to the nature of the talks. Since it was a single-track conference and the topics weren't technical reference material, laptops were almost entirely sheathed the whole time and even phone-checking was shockingly limited. Since the event was in a single room, there was no walking around needed between sessions - the breaks were spent talking about the just-presented topic or getting to know the people sitting with you.

This was also personally a very interesting experience for me. When it comes to Cocoa development, I am but an egg. It was weird being back in the position of not being known by anyone and only knowing a few people by their works and reputation – it was like my first MWLUG a couple years ago. I had a bit of "I got to meet Marco Arment and Brent Simmons!" fanboy-ism, but mostly it was great meeting a whole slew of people in a community I've only ever observed from the outside. It also made me realize that I need to get over the hump of the train ride and watch for more events in the city.

For reference, as you'd probably expect, nobody had any idea what "IBM Domino" is other than one long-former IBMer. The reactions I got when I explained that I do Java development all day ranged from "ah, I've used that for some Android development" to the sort of sympathetic reaction you'd get if you told someone you were just evicted from your house.

On a final note, the conference badges were amazing. They were all hand-drawn renditions of attendees' Twitter-or-otherwise avatars and it was an unexpected cool touch. The Fracture (one of the sponsors) prints they threw in were a nice bonus.

A Welcome SSL Stay of Execution

Tue Oct 21 17:52:58 EDT 2014

Tags: ssl

As you likely know from the torrent of posts on Planet Lotus on the topic, IBM announced a hopefully-imminent pair of updates to cover the two main SSL issues that have come to the fore recently: lack of SHA-2 support and the POODLE vulnerability in SSLv3. This is welcome indeed!

Personally, I'm going to stick with the nginx approach for HTTP, even in simple setups, because I've found the extra features you can get (and the promising new ones I haven't tried) to be a dramatic improvement in my server's capabilities. But I'm pleased that the pressure to investigate proxies for other protocols is lessened for the time being. It's not a full SSL revamp (the technote only mentions TLS 1.0 for Domino), but it's something to calm the nerves.

Nonetheless, it's been a good experience to branch out into better ways of running the server. I expect I'll eventually look into mail and LDAP proxying, both to get the highest level of SSL security and to see how useful the other features are (mail load balancing and failover, in particular, would be welcome in my setup).

Some Notes on Developing with the frostillic.us Framework

Thu Oct 09 19:23:04 EDT 2014

Tags: framework

Now that I have a few apps under my belt, I've been getting a better idea of the plusses and minuses of my current development techniques - the frostillic.us Framework combined with stock controls + renderers. This post is basically a mostly-unordered list of my overall thoughts on the current state.

  • Component binding is absolutely the way to go. This pays off in a number of ways, but just knowing that the component is pointed unambiguously at a model property - and thus getting its field type and validators from there - feels right.
  • Similarly, externalizing all strings for translation via a bean or via component binding is definitely the way to go. The "standard" way of adding translation promises the ability to not have to think about it until you're ready, but the result is more of a drag.
  • On the other hand, having to manually write out translation lines for every model property and enum value (e.g. model.Task$TaskStatus.INPROGRESS=In Progress) is a huge PITA. Eventually, it may be worth writing a tool to look for model objects in a DB and present a UI for specifying translations for each property and enum value.
  • It feels like there's still too much domain knowledge required for using Framework objects. Though I try to stick with standard Java and XSP idioms as much as possible, you still have to "just know" and remember classes like BasicXPageController, AbstractDominoModel (and that you should make an AbstractDominoManager inside it), and AbstractXSPServlet. This may be largely unavoidable - Java isn't big on implied and generated code without work. But there's enough specialized knowledge that even I've forgotten stuff like how I added support for the @Table annotation for models to set the form.
    • Designer plugins could help with this, providing ways to create each class type with pre-made Java templates and potentially adding views of each class type. I don't know if I want to bother doing that, though.
  • @ManagedBean is awesome and makes the faces-config.xml method seem archaic (even in light of my recent dabbling with the editor). The downside I can think of is that you don't have a good overview of what the beans in your app are, but that doesn't really come up as a need in reality.
  • The Framework and renderers work great in XPiNC. Good to know, I suppose. They are probably actually a huge boost, speed-wise, over putting a lot of code and resources in the NSF - they dramatically cut down on the network transactions required to render a page.
  • Sticking with stock+ExtLib controls is mostly great. Combined with component binding, my XPages are svelte, I have comparatively few custom controls, and I haven't had to go through the laborious process of writing controls in Java.
  • On the other hand:
    • Trying to write a Bootstrap app with standard components leaks like a sieve. Though there are many times when the controls match up perfectly - xe:formTable, xe:forumView, etc. - the stock controls have no concept of Bootstrap's column layout, so I've had to write custom controls for that.
    • Some ExtLib controls are surprisingly rigid, lacking attrs properties or having weird interaction models like the onItemClick event with context.submittedValue on trees. I guess I could add them myself, but I don't want to get into the business of maintaining a forked ExtLib.
    • Trying to adapt standard code to Bootstrap+jQuery can be a huge PITA. For example, Select2 (and Chosen) don't trigger onchange event handlers written in XSP. While there are ways to work around it, they involve writing odd code that makes big assumptions about the final rendering, and avoiding that is the whole point. I have a "fix" that sort of works, but it's not ideal - it has the side effect of triggering a too-much-recursion exception on the console. There's a bunch of this sort of thing to deal with.
  • My model framework has been serving me extremely well, but some aspects feel weirder over time (like how it blurs the distinction between getting a collection vs. individual model by key). I'm still considering switching to Hibernate OGM, but adapting it would likely be a mountain of work and I'm not 100% sold on its model. Still, the idea of moving to a "real" framework is appealing.
  • Using enums for fixed-value-set model properties is great (there's a rough sketch of what I mean after this list).
  • I don't yet have a good solution for multi-lingual data. Maybe I could use a convention like "fieldname$fr". It hasn't actually cropped up, though, so it's a theoretical issue.
  • I should standardize the way I do configuration and come up with a standard "keywords" mechanism.
  • Similarly, I should codify the bean-backed table into a control, since I use this all the time and end up with similar code all over the place.
  • I should add specific support for using model objects as properties on other objects - both referenced by ID and potentially beans stored via MIME in the documents.
  • I need to add a way to specify error messages in the model translation file. Currently it's not much better than this.
  • I should really add built-in Excel exporting for collections.
  • I'm going to be happy I did the REST services down the line.
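
To illustrate those enum points, here's a rough sketch of the shape I mean. The Task class is hypothetical, and the actual Framework wiring (extending AbstractDominoModel, the nested AbstractDominoManager, and so on) is omitted for brevity:

package model;

// Hypothetical model class; the Framework plumbing is left out here.
public class Task {
	// Fixed-value-set property backed by an enum. Translation entries for the
	// values follow the model.Task$TaskStatus.INPROGRESS=In Progress pattern.
	public enum TaskStatus {
		NOTSTARTED, INPROGRESS, COMPLETED
	}

	private TaskStatus taskStatus = TaskStatus.NOTSTARTED;

	public TaskStatus getTaskStatus() {
		return taskStatus;
	}

	public void setTaskStatus(final TaskStatus taskStatus) {
		this.taskStatus = taskStatus;
	}
}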

Overall, it mostly feels right, and working on "normal" apps feels archaic and brittle by comparison.

Building an App with the frostillic.us Framework, Part 7

Tue Oct 07 21:00:41 EDT 2014

  1. Jul 09 2014 - Building an App with the frostillic.us Framework, Part 1
  2. Jul 11 2014 - Building an App with the frostillic.us Framework, Part 2
  3. Jul 17 2014 - Building an App with the frostillic.us Framework, Part 3
  4. Jul 17 2014 - Building an App with the frostillic.us Framework, Part 4
  5. Jul 21 2014 - Building an App with the frostillic.us Framework, Part 5
  6. Jul 23 2014 - Building an App with the frostillic.us Framework, Part 6
  7. Oct 07 2014 - Building an App with the frostillic.us Framework, Part 7

Well, it's been much longer than planned, and this topic isn't actually particularly groundbreaking, but the series returns!

  1. Define the data model
  2. Create the view and add it to an XPage
  3. Create the editing page
  4. Add validation and translation to the model
  5. Add notification to the model
  6. Add sorting to the view
  7. Basic servlet
  8. REST with Angular.js

One of the edge features of the Framework is that it assists in writing DesignerFacesServlet servlets - which are sort of like XAgents but written directly as Java classes, without an XPage component.

Before I explain how they work in the Framework, there's a caveat: these servlets do not have (reliable) sessionAsSigner access. The reason for this is that IBM's mechanism for determining the signer doesn't cover the case of just having a Java class. That said, it does have access to the rest of the XPages environment, including the same instances of managed beans available to XPages.

With that unpleasantness aside, here's an example servlet:

package servlet;

import javax.faces.context.FacesContext;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import frostillicus.xsp.servlet.AbstractXSPServlet;

public class ExampleServlet extends AbstractXSPServlet {
	@Override
	protected void doService(HttpServletRequest req, HttpServletResponse res, FacesContext facesContext, ServletOutputStream out) throws Exception {
		out.println("hello");
	}
}

Once you create that class (in the package "servlet"), it is available as "/foo.nsf/xsp/exampleServlet". As with an XPage, you can add arbitrary stuff after the servlet name with a "/" and in the query string. Unlike in an XPage, the servlet name is not removed from the path info. So, for example, this method:

// As above, but note the extra imports this needs: java.util.Map and com.ibm.xsp.extlib.util.ExtLibUtil
protected void doService(HttpServletRequest req, HttpServletResponse res, FacesContext facesContext, ServletOutputStream out) throws Exception {
	Map<String, String> param = (Map<String, String>) ExtLibUtil.resolveVariable(facesContext, "param");
	out.println("param: " + param);
	out.println("pathInfo: " + req.getPathInfo());
}

...with this URL fragment:

foo.nsf/xsp/exampleServlet/foo?bar=baz

...results in this in the browser:

param: {bar=baz}
pathInfo: /xsp/exampleServlet/foo

By default, the result is served as text/plain, but you can change that as usual, with res.setContentType(...).
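
For instance, a servlet that emits JSON might look something like this - just a sketch, with the JSON assembled by hand to avoid pulling in any particular library:

protected void doService(HttpServletRequest req, HttpServletResponse res, FacesContext facesContext, ServletOutputStream out) throws Exception {
	// Override the default text/plain content type
	res.setContentType("application/json");

	out.println("{ \"status\": \"ok\", \"pathInfo\": \"" + req.getPathInfo() + "\" }");
}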

For most apps, a servlet like this isn't necessary. And for apps that do have a use for servlets, the XAgent and ExtLib-control routes may be more useful. Nonetheless, I've found a number of uses for these, and I appreciate that I don't have a bunch of extra non-UI XPages cluttering up the list.

NotesIn9 Appearance: Custom Renderers

Thu Oct 02 21:16:20 EDT 2014

In a bout of unintentional timing, Dave Leedy posted an episode of NotesIn9 I recorded about custom renderers. This should pair nicely with my last post - the video provides an explanation for what renderers are, how to attach them to your components, and an example of a basic renderer for a widget container. So if you're interested in going down that path (which you should be!), perhaps the video will help.

In it, I recommend looking at the source of Bootstrap4XPages, which remains an excellent source, and now my own renderers may prove useful as well. Once you have a good handle on how they work, the layout renderer may be a good resource.

I Posted My WrapBootstrap Ace Renderkit

Tue Sep 30 19:49:46 EDT 2014

Tags: bootstrap

Since I realized there was no reason not to and it could be potentially useful to others, I tossed the renderkit I use for the WrapBootstrap Ace theme up on GitHub:

https://github.com/jesse-gallagher/Miscellany

As implied by the fact that it's not even a top-level project in my own GitHub profile, there are caveats:

  • The theme itself is not actually included. That's licensed stuff and you'd have to buy it yourself if you want to use it. Fortunately, it's dirt cheap for normal use.
  • It's just a pair of Eclipse projects, the plugin and a feature. To use it, you'll have to import them into Eclipse (or Designer, probably) with an appropriate plugin development environment, copy in the files from the Ace theme to the right place, export the feature project, and add it to Designer and Domino, presumably through update sites.
  • Since it's not currently actually extending Bootstrap4XPages (though I did "borrow" a ton of the code, where attributed), it may not cover all of the same components that that project does.
  • I make no guarantees about maintaining this forked version, since the "real" one with the assets included is in a private repository.
  • I haven't added the theme to the xsp.properties editor via the handy new ability IBM added to the ExtLib yet. You'll have to name it manually, as "wrapbootstrap-ace-1.3" with "-skin1", "-skin2", and "-skin3" suffix variants (see the xsp.properties line after this list).
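
For that last point, naming it manually just means setting the theme in the app's xsp.properties (or the equivalent server-level setting) - for example, assuming you want the first skin variant:

xsp.theme=wrapbootstrap-ace-1.3-skin1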

Still, I figured it may be worthwhile both as a plugin directly and as an educational endeavor. I believe I cover a couple bases that Bootstrap4XPages doesn't and, since I'm writing for a predefined theme with a specific set of plugins and for my own needs, I was able to make more assumptions about how things should work. Some of those are a bit counter-intuitive (for example, a "bare word" image property on the layout linksbar (like image="dashboard") actually causes the theme to render a Font Awesome icon with that name), but mostly things should work like you'd expect. The theme contains no controls of its own.

So... have fun with it!

A Note About Installing the OpenNTF API RC2 Release

Fri Sep 26 14:43:51 EDT 2014

Tags: oda

In the latest release of the OpenNTF Domino API, the installation process has changed a bit, which is most notable for Designer. This is due to the weird requirements in Designer for properly getting source and documentation working.

When you download the file, you'll find two Update Site NSFs instead of the previous Eclipse update sites: one for Designer and one for Domino. There are a couple ways you can use these:

  • If you're already using Update Sites for Designer or Domino, you can use the "Import Database..." action in your existing DB to import the appropriate NSF from the distribution.
  • For Domino, if you're only using the API as far as OSGi bundles go, you can copy the Update Site NSF up to the server and use the OSGI_HTTP_DYNAMIC_BUNDLES INI parameter to point to it (see the example notes.ini line after this list).
  • If you'd like to install in Designer from the NSF directly, you can drop it in your data directory, open it in Notes, and go to the "Show URLs..." action on the menu:



    That will display URLs for HTTP and NRPC - the latter is the better one. You can use that to add an update site in the normal File → Application → Install... dialog.
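
For the OSGI_HTTP_DYNAMIC_BUNDLES route above, the notes.ini line is along these lines - the NSF name here is just a placeholder for wherever you put the Update Site NSF relative to the data directory:

OSGI_HTTP_DYNAMIC_BUNDLES=updatesite-domino.nsf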

There's an important note about the Designer install: due to the restructuring of the plugins since the last release, it's probably safest to remove any existing installation first. You can do this via File → Application → Application Management:

Find any "OpenNTF Domino" features and uninstall them each in turn:

After that, proceed with installing the API normally from the provided NSF Update Site or your own.

Domino and SSL: Come with Me If You Want to Live

Wed Sep 24 15:38:51 EDT 2014

Tags: nginx
  1. Sep 18 2014 - Setting up nginx in Front of a Domino Server
  2. Sep 20 2014 - Adding Load Balancing to the nginx Setup
  3. Sep 22 2014 - Arbitrary Authentication with an nginx Reverse Proxy
  4. Sep 24 2014 - Domino and SSL: Come with Me If You Want to Live

Looking at Planet Lotus and Twitter the last few weeks, it's impossible to not notice that the lack of SHA-2 support in Domino has become something of A Thing. There has been some grumbling about it for a while now, but it's kicked into high gear thanks to Google's announcement of imminent SHA-1 deprecation. While it's entirely possible that Google will give a stay of execution for SHA-1 when it comes to Chrome users (it wouldn't be the first bold announcement they quietly don't go through with), the fact remains that Domino's SSL stack is out-of-date.

Now, it could be that IBM will add SHA-2 support (and, ideally, other modern SSL/TLS features) before too long, or at least announce a plan to do so. This is the ideal case, since, as long as Domino ships with stated support for SSL on multiple protocols, it is important that that support be up-to-date. Still, if they do, it's only a matter of time until the next problem arises.

So I'd like to reiterate my position, as reinforced by my nginx series this week, that Domino is not a suitable web server in its own right. It is, however, a quite-nice app server, and there's a crucial difference: an app server happens to serve HTML-based applications over HTTP, but it is not intended to be a public-facing site. Outside of the Domino (and PHP) world, this is a common case: app/servlet servers like Tomcat, Passenger, and (presumably) WebSphere can act as web servers, but are best routed to by a proper server like Apache or nginx, which can better handle virtual hosts, SSL, static resources, caching, and multi-app integration.

In that light, IBM's behavior makes sense: SSL support is not in Domino's bailiwick, and nor should it be. There are innumerable features that Domino should gain in the areas of app dev, messaging, and administration, and it would be best if the apparently-limited resources available were focused on those, not on patching things that are better solved externally.

I get that a lot of people are resistant to the notion of complicating their Domino installations, and that's reasonable: one of Domino's strengths over the years has been its all-in-one nature, combining everything you need for your domain in a single, simple installation. However, no matter the ideal, the case is that Domino is now unsuitable for the task of being a front-facing web server. Times change and the world advances; it also used to be a good idea to develop Notes client apps, after all. And much like with client apps, the legitimate benefits of using Domino for the purpose - ease of configuration, automatic replication of the config docs to all servers - are outweighed by the need to have modern SSL, load balancing/failover, HTML post-processing (there's some fun stuff on that subject coming in time), and multiple back-end app servers.

The last is important: Domino is neither exclusive nor eternal. At some point, it will be a good idea to use another kind of app server in your organization, such as a non-Domino Java server, Ruby, Node, or so on (in fact, it's a good idea to do that right now regardless). By learning the ropes of a reverse-proxy config now, you'll smooth that process. And from starting with HTTP, you can expand to improving the other protocols: there are proxies available for SMTP, IMAP, and LDAP that can add better SSL and load balancing in similar ways. nginx itself covers the first two, though there are other purpose-built utilities as well. I plan to learn more about those and post when I have time.

The basic case is easy: it can be done on the same server running Domino and costs no money. It doesn't even require nginx specifically: IHS (naturally) works fine, as does Apache, and Domino has had "sitting behind IIS" support for countless years. There is no need to stick with an outdated SSL stack, bizarre limitations, and terrible keychain tools when this problem has been solved with aplomb by the world at large.


Edit: as a note, this sort of setup definitely doesn't cover ALL of Domino's SSL tribulations. In addition to incoming IMAP/SMTP/LDAP access, which can be mitigated, there are still matters of outgoing SMTP and requests from the also-sorely-outdated Java VM. Those are in need of improvement, but the situation is a bit less dire there. Generally, anything that purports to support SSL either as a server or a client has no business including anything but the latest features. Anything that's not maximally secure is insecure.

Arbitrary Authentication with an nginx Reverse Proxy

Mon Sep 22 18:33:37 EDT 2014

  1. Sep 18 2014 - Setting up nginx in Front of a Domino Server
  2. Sep 20 2014 - Adding Load Balancing to the nginx Setup
  3. Sep 22 2014 - Arbitrary Authentication with an nginx Reverse Proxy
  4. Sep 24 2014 - Domino and SSL: Come with Me If You Want to Live

I had intended that this next part of my nginx thread would cover GeoIP, but that will have to wait: a comment by Tinus Riyanto on my previous post set my thoughts aflame. Specifically, the question was whether or not you can use nginx for authentication and then pass that value along to Domino, and the answer is yes. One of the aforementioned WebSphere connector headers is $WSRU - Domino will accept the value of this header as the authenticated username, no password required (it will also tack the pseudo-group "-WebPreAuthenticated-" onto the names list for identification).

Basic Use

So one way to do this would be to hard-code a value - you could disallow Anonymous access but treat all traffic from nginx as "approved" by giving it some other username, like:

proxy_set_header    $WSRU    "CN=Web User/O=SomeOrg";

Which would get you something, I suppose, but not much. What you'd really want would be to base this on some external variable, such as the user that nginx currently thinks is accessing it. An extremely naive way to do that would be to just set the line like this:

proxy_set_header    $WSRU    $remote_user;

Because nginx doesn't actually do any authentication by default, this would authenticate with Domino as whatever name the user happens to toss into the HTTP Basic authentication prompt. So... never do that. However, nginx can do authentication, with the most straightforward mechanism being similar to Apache's method. There's a tutorial here on a basic setup:

http://www.howtoforge.com/basic-http-authentication-with-nginx

With such a config, you could make a password file where the usernames match something understandable to Domino and the password is whatever you want, and then use the $remote_user name to pass it along. You could expand this to use a different back-end, such as LDAP, and no doubt the options continue from there.
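
Put together, the relevant chunk of the site config would look something like this - a sketch only, with a hypothetical back-end address and password-file location:

location / {
	# Prompt for HTTP Basic credentials; the usernames in the file should be
	# names Domino will recognize (e.g. "CN=Some User/O=SomeOrg")
	auth_basic            "Restricted";
	auth_basic_user_file  /etc/nginx/htpasswd;

	# Hypothetical Domino back-end address
	proxy_pass            http://localhost:8088;

	# Pass the authenticated name along via the WebSphere connector header
	proxy_set_header      $WSRU    $remote_user;
}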

Programmatic Use

What had me most interested is the possibility of replacing the DSAPI login filter I wrote years ago, which is still in use and always feels rickety. The way that authentication works is that I set a cookie containing a BASE64-encoded and XOR-key-encrypted version of the username on the XPages side and then the C code looks for that and, if present, sets that as the user for the HTTP request. This is exactly the sort of thing this header could be used for.

One of the common nginx modules (and one which is included in the nginx-extras package on Ubuntu) adds the ability to embed Lua code into nginx. If you're not familiar with it, Lua is a programming language primarily used for this sort of embedding. It's particularly common in games, and anyone who played WoW will recognize its error messages from misbehaved addons. But it fits just as well here: I want to run a small bit of code in the context of the nginx request. I won't post all of the code yet because I'm not confident it's particularly efficient, but the modification to the nginx site config document is enlightening.

First, I set up a directory for holding Lua scripts - normally, it shouldn't go in /etc, but I was in a hurry. This goes at the top of the nginx site doc:

lua_package_path "/etc/nginx/lua/?.lua;;";

Once I did that, I used a function from a script I wrote to set an nginx variable on the request to the decoded version of the username in the location / block:

set_by_lua $lua_user '
	local auth = require "raidomatic.auth"
	return getRaidomaticAuthName()
';

Once you have that, you can use the variable just like any of the built-in ones. And thus:

proxy_set_header    $WSRU    $lua_user;

With that set, I've implemented my DSAPI login on the nginx site and I'm free to remove it from Domino. As a side benefit, I now have the username available for SSO when I want to include other app servers behind nginx as well (Domino accepts LDAP-style comma-delimited names in the header, too, which makes that sort of integration easier).

Another Potential Use: OAuth

While doing this, I thought of another perfect use for this kind of thing: REST API access. When writing a REST API, you don't generally want to use session authentication - you could, by having users POST to ?login and then using that cookie, but that's ungainly and not in-line with the rest of the world. You could also use Basic authentication, which works just fine - Domino seems to let you use Basic auth even when you've also enabled session auth, so it's okay. But the real way is to use OAuth. Along this line, Tim Tripcony had written oauth4domino.

In his implementation, you get a new session variable - sessionFromAuthToken - that represents the user matching the active OAuth token. Using this reverse-proxy header instead, you could inspect the request for an authentication token, access a local-only URL on the Domino server to convert the token to a username (say, a view URL where the form just displays the user and expiration date), and then pass that username (if valid and non-expired) along to Domino.

With such a setup, you wouldn't need sessionFromAuthToken anymore: the normal session variable would be the active user and the app will act the same way no matter how the user was authenticated. Moreover, this would apply to non-XSP artifacts as well and should work with reader/author fields... and can work all the way back to R6.

Now, I haven't actually done any of this, but the point is one could.


So add this onto the pile of reasons why you should put a proxy server (nginx or otherwise) in front of Domino. The improvements to server and app structure you can make continue to surprise me.