
Tech Life of Recht » archive for 'Java'

 More on Andronos

  • January 29th, 2010
  • 12:29 am

14 releases later, and Andronos (my Sonos controller for Android) is actually looking pretty good. My 1337 gui skills have been at work, and in my own opinion, the application has been styled somewhat nicely. Basic functionality is present:

  • Detect and list available zones
  • Group zones together
  • Stop/play/next/previous
  • Playlist management
  • Browse music
  • Browse radio stations
  • Volume control, both individual and group volume

I’ve also managed to add some more special features:

  • Quickplay list – I use it for starting my favorite radio station without having to navigate the browsing structure
  • Indexing and freetext search
  • Last.fm integration – covers are fetched automatically if none exist locally, and extra info (tags and play count) can be retrieved. Also, it’s possible to love a song on last.fm

All the features of the regular controller which I normally use are done, so I’m more or less ready to drop my iPhone. Now begins the hard part of adding new valuable features – most of them are not particularly easy to implement:

  • Faster – the Android platform is pretty nice to work with, but Andronos is not exactly as fast as the native controller. Caching can add some performance, but in the end, I’ll probably have to do some pretty low-level optimizations all over the place
  • Cover browsing – it should be possible to browse the music archive based on a list of covers
  • Rhapsody and Pandora – probably not hard to do, but neither of the two is available in Denmark. Help is appreciated here – I don’t quite know how yet, but if you’re interested, please contact me.
  • Dynamic playlists – Andronos should be able to dynamically create playlists based for example on loved songs, previously played songs, and so on. Also, it should be able to select music based on a general category (party, relaxing, cooking, whatever)

I’ll probably think of more features to add, but it should be enough for now – there should also be something left for Sonos to do when they get around to making a supported controller for Android.

And then to something a little different, but related. Someone asked me today if I had an opinion of mobile development with Android. Having worked with Android for a couple of my pet projects, there are some things I’ve noticed, and here are some of them, in no particular order. Hopefully, I’ll get time to elaborate on them later on.

  • As a Java programmer, nothing really beats having your normal environment, in my case Eclipse, and all the standard libraries. Need UPnP? Download a library. Need last.fm integration? Download a library. Need raw network access? Download a library using JNI. (in the last case, be prepared to fiddle around with Make-ish files, but it can be done). No need to learn a new language or new basic tools, you just have to learn a new API.
  • It can be a little hard to drop all the fancy patterns and design principles, but it’s often necessary to get acceptable performance. Object allocation and garbage collection are pretty expensive, which is the complete opposite of the regular Java VM, so you have to be careful, and that can hurt in a number of ways (think maintainability, API design, testability)
  • The declarative UI approach works pretty well, but the Eclipse plugin does a pretty bad job of rendering the UI, so in most cases, you have to fire up the app on either an emulator or a phone to get a real look at the UI. A simple thing: Why are styles not rendered in the plugin?
  • The UI does have a number of bugs and undocumented features. Drawables are probably the worst I’ve met. They can be defined in XML, and can be used for eg background gradients, button borders, and much more, but they are truly trial-and-error
  • Android Market works pretty nicely, in principle, at least. I wouldn’t have been able to create Andronos if I’d had a turnaround time of a month for each release. Of course, Andronos is a little special, because Sonos systems can be configured in so many different ways, and I do not have one of each player model, but still. Being able to get a bug report, fix the bug, and release a new version in a matter of 10 minutes is pretty cool.
  • A couple of things about Android Market, though: Why can’t I see the comments in a regular browser, and why can’t I reply to the comments?
  • Fortunately, Andronos is pretty flexible in the layout, so it runs without any serious problems on both small and large screens. However, this can easily become a problem if you haven’t defined the UI in device-independent units, and even then, you might be forced to have different layouts for different devices. I’m guessing Apple will have to cope with this too, now that the iTablet (I forgot its name) is out
  • I can see why root access is something you don’t want to give out to everybody, but couldn’t there be some way of getting partial root access? For example, if I want to send an ICMP packet, I need write access to the network device, but I can’t get that. Why?
  • Error handling could be better when an application crashes. I’ve installed a custom exception handler which emails me stack traces (a rough sketch of the idea follows after this list), but couldn’t this just be built-in?
  • The Android API itself is at points somewhat strange. Why do I sometimes need to bitwise add flags to a component? Why must I always remember to call super? Most of the time, it’s just like doing Swing, and I can live with that. The API could be more “modern”, however, and not use inheritance quite as much as it does.
  • Testing isn’t as easy as it could have been (and with Andronos, it’s even harder, because most functionality only makes sense when connected to a Sonos device), but that’s at least in part because GUI testing has never been easy. Just learn to separate UI logic from “business” logic, and then the business logic can be tested as you would normally do it.
  • Most importantly, and this outweighs any disadvantages Android might have: The platform is open, there’s an active community, there’s lots of open source, and you’re not forced into anything
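Regarding the crash handler mentioned above, the basic idea is just a default UncaughtExceptionHandler. The sketch below is not the actual Andronos code: the class name, the email-via-ACTION_SEND approach, and the "send immediately on crash" strategy are all illustrations of how it could be done.
[code]
import java.io.PrintWriter;
import java.io.StringWriter;

import android.content.Context;
import android.content.Intent;

public class CrashReporter implements Thread.UncaughtExceptionHandler {
    private final Context context;
    private final Thread.UncaughtExceptionHandler defaultHandler;

    private CrashReporter(Context context) {
        this.context = context;
        this.defaultHandler = Thread.getDefaultUncaughtExceptionHandler();
    }

    public static void install(Context context) {
        Thread.setDefaultUncaughtExceptionHandler(new CrashReporter(context));
    }

    public void uncaughtException(Thread thread, Throwable ex) {
        StringWriter trace = new StringWriter();
        ex.printStackTrace(new PrintWriter(trace));

        // Hand the stack trace to the user's mail client. In practice you might
        // persist the trace and offer to send it on the next launch instead.
        Intent send = new Intent(Intent.ACTION_SEND);
        send.setType("message/rfc822");
        send.putExtra(Intent.EXTRA_SUBJECT, "Andronos crash report");
        send.putExtra(Intent.EXTRA_TEXT, trace.toString());

        Intent chooser = Intent.createChooser(send, "Send crash report");
        chooser.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(chooser);

        // Let the platform's default handler finish, so the normal crash dialog still appears
        defaultHandler.uncaughtException(thread, ex);
    }
}
[/code]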

That’s it for now. And no new releases tonight (but that’s probably just because I’ve been musically cultural tonight).

 Andronos, Sonos remote control for Android

  • January 10th, 2010
  • 11:16 pm

Lately, I’ve been working on my first real project for Android, a remote control for my Sonos system, so that I can finally get rid of my iPhone (which I am only using for that purpose).

This has been quite a learning experience, both in regard to Android and Sonos – Sonos is controlled using UPNP, so now I probably know much more about that than I’d ever want to. However, it seems to have paid off, because I finally have something that works, at least somewhat. Performance isn’t great, and some features are still missing, but that should all be fixable.

My plans are to build some extra last.fm support into the remote control, so that it can, for example, generate queues based on track popularity, display album/artist/track info, and much more. Already, album covers are retrieved from last.fm (I’ll probably change this so it checks the Sonos system first, at some point).

The features implemented now are: basic playback control (previous, next, play, pause), mute/unmute, volume control, adding/removing from queue, and browsing available music. The most important missing feature is probably zone management, but hopefully, I’ll get time to fix that soon. Also, internet radio isn’t working, and it seems that you cannot change from radio to a regular playlist.

The application is available on Android Market under the name Andronos, so if you own an Android phone and a Sonos system, please try it out. Any bugs or suggestions can be reported on the Google Code site. If you’re really ambitious, I’m also accepting patches (the project is open source, after all). The code is hosted at Gitorious, so just go ahead and check it out.

 Using ActAs with Metro

  • January 5th, 2010
  • 12:18 pm

Yesterday, I wrote about how to implement an STS with Metro. The reason for implementing an STS in the first place is that it enables identity delegation, something you probably want if you need to access a service on behalf of a specific user. The general flow is that the user authenticates, probably using SSO of some kind, and accesses a website. The site invokes a service on behalf of the user, and the service needs to be pretty sure that the user is actually sitting at the other end, even though there is no direct communication between the user and the service. The job of the STS is to be the party everybody trusts: when the STS issues a token which says that the user is valid, the service can trust that this is actually the case.

All of this can be done more or less automatically with Metro (at least when using a nightly build) by using this service policy:
[code]
<!-- Excerpt: only the IssuedToken assertion is shown here, and the exact structure
     should be treated as a sketch. The surrounding binding, layout, and timestamp
     policy is omitted. -->
<sp:IssuedToken>
  <sp:Issuer>
    <wsa:Address>urn:localsts</wsa:Address>
  </sp:Issuer>
  <sp:RequestSecurityTokenTemplate>
    <wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
    <wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</wst:KeyType>
  </sp:RequestSecurityTokenTemplate>
</sp:IssuedToken>
[/code]

Here, we express that the service requires an issued token of type SAML 2.0. Issued token means that the token has been created by an STS. In this case, we specify that the STS identified by urn:localsts must issue a token of type SAML 2.0. The exact location of the STS needs to be configured in the client.

Unfortunately, WS-SecurityPolicy does not make it possible to express the requirements for the WS-Trust Issue request. When using identity delegation, two sets of credentials should be passed to the STS: The client credentials, for example an X509Token or a UsernameToken, and the user credentials. The client credentials are provided using standard WS-Security mechanisms, and the user credentials are included in the Issue request using the ActAs element.

As shown in the STS example, the STS policy file takes care of the client credentials by specifying the appropriate tokens. The user credentials token cannot, however, be expressed in the policy, so it needs to be agreed upon out of band. This also means that you have to provide it manually to the client.

Luckily, it’s pretty easy to add an ActAs token to the client. Normally, the client is generated using wsimport. In this example, the service is called ProviderService:
[code]
DefaultSTSIssuedTokenConfiguration config = new DefaultSTSIssuedTokenConfiguration();
config.setSTSInfo("http://docs.oasis-open.org/ws-sx/ws-trust/200512",
        "http://localhost:8080/sts/sts",
        "http://localhost:8080/sts/sts?wsdl",
        "SecurityTokenService",
        "ISecurityTokenService_Port",
        "http://tempuri.org/");
config.getOtherOptions().put(STSIssuedTokenConfiguration.ACT_AS, createToken());

STSIssuedTokenFeature feature = new STSIssuedTokenFeature(config);
ProviderService service = new ProviderService();
Provider port = service.getProviderPort(feature);
EchoResponse result = port.echo(new Echo());
[/code]

Here, we create a new configuration object, set the endpoint information for the STS, and add an ActAs token. The contents of the ACT_AS attribute should be an instance of com.sun.xml.ws.security.Token, for example a com.sun.xml.wss.saml.Assertion. Normally, you don’t generate the token yourself. Instead, you get it as part of the initial authentication response – for example, if you’re using SAML 2.0 web SSO, one of the attributes received might be the ActAs token that should be passed to the STS when invoking services.
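As an aside, if the ActAs assertion arrives as a DOM Element (for example extracted from the SSO response), a createToken() helper could look roughly like the sketch below. It uses the same SAMLAssertionFactory approach as the STS attribute provider, but the method signature and parameter are my own; the original createToken() takes no arguments.
[code]
import org.w3c.dom.Element;

import com.sun.xml.ws.security.Token;
import com.sun.xml.wss.saml.SAMLAssertionFactory;

private Token createToken(Element actAsAssertion) throws Exception {
    // Convert the DOM representation of the SAML 2.0 assertion received during SSO
    // into a com.sun.xml.wss.saml.Assertion, which can be used as the ACT_AS token.
    return SAMLAssertionFactory.newInstance(SAMLAssertionFactory.SAML2_0)
            .createAssertion(actAsAssertion);
}
[/code]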

 Building an STS with Metro

  • January 4th, 2010
  • 10:25 pm

One of my recent tasks has been to see if it was possible to implement an OIO-Trust-compliant STS using the Metro stack from Sun. Metro contains WSIT, which has a number of classes for building an STS, so it’s not that hard. However, large portions of the code are quite undocumented, so I decided to write some of my findings down, hence this post (which is probably only interesting to a very few people).

First of all, OIO-Trust is a Danish WS-Trust profile, which basically says how Issue requests should look. The basic premise is that in order to invoke a SOAP service, you need a token. The STS issues the token based on some criteria using the WS-Trust protocol on top of SOAP.
In OIO-Trust, the Issue request must be signed, and it must contain a so-called bootstrap token. The bootstrap token is a SAML 2.0 assertion. Furthermore, the request must contain the X509 certificate which is used to sign the message. The token requested in the Issue request has key type PublicKey (that is, asymmetric) and token type SAML 2.0. So, the input is a SAML 2.0 assertion, and the output is also a SAML 2.0 token. More specifically, the output is a holder-of-key token, which has the requestor’s X509 certificate in the SubjectConfirmationData. The assertion is signed by the STS and contains, by default, all the attributes from the input assertion.

In order to create an STS using Metro, you need to

  • Configure the Metro servlet in web.xml
  • Implement a simple STS endpoint class
  • Create a WSDL and a security policy
  • Create a number of services for handling attributes, configuration, etc

Configuring web.xml
This assumes that you’re using a simple servlet container. If the container supports JAX-WS, it shouldn’t be necessary.
When using Metro, all requests go through the same servlet, the WSServlet. The exact endpoint implementation used is then configured in another file, WEB-INF/sun-jaxws.xml. Therefore, simply add the following to web.xml:
[code]
<listener>
  <listener-class>com.sun.xml.ws.transport.http.servlet.WSServletContextListener</listener-class>
</listener>

<servlet>
  <servlet-name>sts</servlet-name>
  <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
  <servlet-name>sts</servlet-name>
  <url-pattern>/services/*</url-pattern>
</servlet-mapping>
[/code]

This maps all requests under /services to Metro.

Implement the STS endpoint
Implementing the endpoint is quite simple, as it’s simply a question of extending a Metro class and injecting a resource. Here is a basic implementation:
[code]
import javax.annotation.Resource;
import javax.xml.transform.Source;
import javax.xml.ws.Provider;
import javax.xml.ws.Service;
import javax.xml.ws.ServiceMode;
import javax.xml.ws.WebServiceContext;
import javax.xml.ws.WebServiceProvider;
import javax.xml.ws.handler.MessageContext;

import com.sun.xml.ws.security.trust.sts.BaseSTSImpl;

@ServiceMode(value=Service.Mode.PAYLOAD)
@WebServiceProvider(wsdlLocation="WEB-INF/wsdl/sts.wsdl")
public class TokenService extends BaseSTSImpl implements Provider<Source> {

    @Resource
    protected WebServiceContext context;

    protected MessageContext getMessageContext() {
        return context.getMessageContext();
    }
}

[/code]

No changes should be necessary, as the BaseSTSImpl class will handle all WS-Trust communication. What you need to do is to configure the base class according to the local requirements. More on that a little later.

In order to wire the STS endpoint into Metro, you need to create a WEB-INF/sun-jaxws.xml file. The file should contain something like this:

[code]
<?xml version="1.0" encoding="UTF-8"?>
<endpoints xmlns="http://java.sun.com/xml/ns/jax-ws/ri/runtime" version="2.0">
  <!-- Adjust the implementation attribute to the fully qualified name of your endpoint class -->
  <endpoint name="sts"
            implementation="dk.itst.oiosaml.sts.TokenService"
            url-pattern="/services/sts"
            binding="http://schemas.xmlsoap.org/wsdl/soap/http" />
</endpoints>
[/code]

This binds the TokenService implementation to the url /services/sts using SOAP 1.1 (specified by the binding attribute).

Creating the WSDL and policy file
This is by far the hardest part of creating an STS for Metro. The WSDL should be pretty standard, and the same file can be used for all implementations. However, the WSDL file must also contain a security policy, as defined by WS-SecurityPolicy, and writing the policy can be pretty complicated. Netbeans has some support for writing policies, but I prefer to do it by hand because then you’re sure what you’ll get (once you understand WS-SecurityPolicy, that is).

The WSDL file tends to get somewhat large, so I won’t include it here – instead, you can download it if you want to see it. Basically, the WSDL is split into two parts: the regular WSDL stuff with types, messages, porttypes, bindings, and services, and the WS-SecurityPolicy stuff. Normally, the policy consists of 3 parts: the service policy, which defines which tokens should be used and how the security header should be laid out; a policy which defines signature and encryption requirements for the request; and a policy for the response. These parts are then wired into the normal WSDL using PolicyReference elements.
In the example file, the service policy defines that we’re using an asymmetric binding (that is, the tokens are different in the request and response – for example when using public/private keys). The policy also says something about the layout, and that the security header must contain a timestamp. Finally, it also enables WS-Addressing.

Because this is an STS, the WSDL also contains a third part, namely static configuration of the STS. This includes configuring which certificates to use, how to validate incoming requests, and how tokens should be created.

Basically, this finishes the configuration of a very basic STS. However, there are some aspects which probably require some adjustments.

Checking if the requesting entity is allowed to access the requested service
When a client requests a new token, it includes a reference to the service in the AppliesTo element. Sometimes, there might be restrictions on who can access what. The Metro STS can check if the client is allowed to access a service by implementing the com.sun.xml.ws.api.security.trust.STSAuthorizationProvider interface. The interface has one method, isAuthorized(subject, appliesTo, tokenType, keyType), which returns true or false:
[code]
package dk.itst.oiosaml.sts;

import javax.security.auth.Subject;
import com.sun.xml.ws.api.security.trust.STSAuthorizationProvider;

public class AuthorizationProvider implements STSAuthorizationProvider {

    public boolean isAuthorized(Subject subject, String appliesTo, String tokenType, String keyType) {
        return true;
    }
}
[/code]

Metro uses the standard JDK service mechanism to discover implementations of this interface. That means that you should create a file under /META-INF/services/ in your source directory, named after the interface, and populate it with the fully qualified classname of the implementation – in this example, create /META-INF/services/com.sun.xml.ws.api.security.trust.STSAuthorizationProvider with the contents dk.itst.oiosaml.sts.AuthorizationProvider.

Specifying attributes
Normally, you probably want to be able to configure the contents of the generated assertion, at the very least the attributes used, as well as the NameID of the subject. This is also done using a service implementation, this time using the com.sun.xml.ws.api.security.trust.STSAttributeProvider interface.

The STSAttributeProvider interface has one method, getClaimedAttributes(subject, appliesTo, tokenType, claims), which returns a map of all the attributes and their values.

The subject contains information about the requesting client, in our example identified by an X509 certificate. The claims object contains any claims included in the request. It also holds any tokens included in OnBehalfOf or ActAs. These tokens are placed in claims.getSupportingProperties(), where they can be read as Subject objects. Here’s an example of reading an assertion which has been included in ActAs:
[code]
private Assertion getSubject(Claims claims) {
    Subject subject = null;
    for (Object prop : claims.getSupportingProperties()) {
        if (prop instanceof Subject) {
            subject = (Subject) prop;
        }
    }
    if (subject != null) {
        Set<Element> creds = subject.getPublicCredentials(Element.class);
        if (!creds.isEmpty()) {
            Element assertion = creds.iterator().next();
            try {
                return SAMLAssertionFactory.newInstance(SAMLAssertionFactory.SAML2_0).createAssertion(assertion);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    return null;
}
[/code]

The attribute provider can then be implemented – here’s an example where the attributes from the ActAs assertion are simply copied to the resulting assertion:
[code]
public Map<QName, List<String>> getClaimedAttributes(Subject subject, String appliesTo, String tokenType, Claims claims) {
    Map<QName, List<String>> res = new HashMap<QName, List<String>>();
    Assertion assertion = getSubject(claims);
    if (assertion != null) {
        AttributeStatement attrs = getAttributes(assertion);
        for (Attribute attr : attrs.getAttributes()) {
            List<String> values = new ArrayList<String>();
            for (Object val : attr.getAttributes()) {
                values.add(val.toString());
            }
            res.put(new QName(attr.getName()), values);
        }

        res.put(new QName(assertion.getSubject().getNameId().getNameQualifier(),
                STSAttributeProvider.NAME_IDENTIFIER),
                Collections.singletonList(assertion.getSubject().getNameId().getValue()));
    }
    return res;
}
[/code]

Notice the last statement, where the NameID is added. The Metro STS will check if an attribute with the name STSAttributeProvider.NAME_IDENTIFIER is present, and in that case use that as the NameID of the subject in the generated assertion.

Handling configuration
The Metro STS must know all services for which it can issue tokens. These services can either be configured statically in the WSDL file, or they can be provided programmatically. The static configuration is probably only interesting during development; in a production environment, you probably want to build a nice admin console where services can be added and removed at runtime.

Static configuration takes place in the STSConfiguration element in the WSDL file. It can contain a ServiceProviders tag, which can then contain a number of ServiceProvider tags. Each ServiceProvider must be configured with an endpoint (the AppliesTo value), a certificate, and a token type:

[code]
<!-- Sketch of the static configuration; the endPoint attribute value is illustrative -->
<tc:STSConfiguration xmlns:tc="http://schemas.sun.com/ws/2006/05/trust/server">
  <tc:LifeTime>36000</tc:LifeTime>
  <tc:Contract>com.sun.xml.ws.security.trust.impl.WSTrustContractImpl</tc:Contract>
  <tc:Issuer>urn:localtokenservice</tc:Issuer>
  <tc:ServiceProviders>
    <tc:ServiceProvider endPoint="http://localhost:8080/poc-provider/ProviderService">
      <tc:CertAlias>poc-provider</tc:CertAlias>
      <tc:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</tc:TokenType>
    </tc:ServiceProvider>
  </tc:ServiceProviders>
</tc:STSConfiguration>
[/code]

The static configuration also contains information about the STS’ own id (the Issuer element), as well as the lifetime of issued tokens. The CertAlias value of a ServiceProvider must point to an alias in the trust store.

Programmatic configuration
Controlling configuration programmatically is a question of providing a service implementation of com.sun.xml.ws.api.security.trust.config.STSConfigurationProvider. This interface has a single method, getSTSConfiguration(), which returns a configuration object – either your own implementation or an instance of DefaultSTSConfiguration.
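For completeness, here is a minimal sketch of such a provider, registered through META-INF/services just like the authorization provider above. The package and class name are mine, and the DefaultSTSConfiguration import is from memory (check your Metro version); the configuration object is left unpopulated, since how you fill it depends on where you store the service metadata:
[code]
package dk.itst.oiosaml.sts;

import com.sun.xml.ws.api.security.trust.config.STSConfiguration;
import com.sun.xml.ws.api.security.trust.config.STSConfigurationProvider;
import com.sun.xml.ws.security.trust.impl.DefaultSTSConfiguration;

public class ConfigurationProvider implements STSConfigurationProvider {

    public STSConfiguration getSTSConfiguration() {
        DefaultSTSConfiguration config = new DefaultSTSConfiguration();
        // Populate issuer, token lifetime, and the known service providers here,
        // for example from a database maintained by an admin console.
        return config;
    }
}
[/code]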

That more or less concludes my findings for now. There are a number of details I haven’t covered here, but I’ll wait with that until another time.

 Fun with JXTA

  • October 26th, 2009
  • 10:46 pm

Recently, I’ve been messing around with JXTA – one of the things you might have heard about at some point (like JINI, for example), but never really given any thought to. Probably rightly so, because it’s only interesting if you do any kind of P2P. And not just human peers.

Anyways, we’re planning on using JXTA to create a distributed version of one of our big monolithic systems. The data model has been modified to support distribution, and we’ve had a prototype running with hardcoded communication channels. However, JXTA makes everything much more dynamic, and it also introduces the concept of rendezvous and relay nodes so all nodes don’t have to be on the same network – they don’t even have to connect to each other directly. Pretty sweet stuff.

It turns out, however, that the JXTA documentation really doesn’t explain everything, so I have two things I want to share – not that I expect them to be useful to very many people.

The first thing is really just a problem rather than a solution. I develop on a nice Macbook Pro Santa Rosa. I don’t particularly like Apple (as in not at all, actually. Developer-wise, they might even be higher on the hate-list than Microsoft), so I’ve removed OSX entirely and installed Ubuntu instead. Now you might ask why I then use a Macbook at all, but it turns out that they make pretty good hardware, so I go with that. Incidentally, I also have an iMac at home, also running Ubuntu. It turns out that this is a problem in one single regard: When I have the wireless network enabled and start a JXTA application, the kernel will freeze. Every time. And there will be no errors in any log files. Nothing bad happens if I use wired network or a 3G modem – which is my solution until now. Of course, it seems that nobody in the whole world has ever had this problem, so there’s not much chance of getting it fixed (and where do you report such a bug?).

The other thing that’s been consuming quite a lot of my time is JXTASockets. JXTASockets are basically regular Java sockets running over JXTA. Instead of connecting to a specific host on a specific port, you simply ask JXTA to give you a socket to an abstract host identifier. JXTA will then route the request to the appropriate host, and then you can send and receive data. Except for the connect phase, it works just like a normal socket. Except not entirely. In many cases you would do something like this on the server side:

[code]
JXTAServerSocket server = new JXTAServerSocket(…);
while (true) {
    JXTASocket socket = server.accept();
    InputStream is = socket.getInputStream();
    OutputStream os = socket.getOutputStream();

    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    int b = -1;
    while ((b = is.read()) > -1) {
        buffer.write(b);
    }
    is.close();

    byte[] res = handleRequest(buffer.toByteArray());
    os.write(res);
    os.close();
    socket.close();
}
[/code]

And on the client side something like this:
[code]
Socket socket = new JXTASocket(…);
InputStream is = socket.getInputStream();
OutputStream os = socket.getOutputStream();

os.write(generateRequest());
os.close();

readResponse(is);
is.close();
socket.close();
[/code]

Of course, all sorts of error handling, buffering, and other stuff is missing, but the overall procedure should be clear: the client writes to the output stream and closes it. The server reads from the stream until it has been closed. The server then generates a response and writes it back. Add the appropriate resource handling, and this will work using normal sockets, but it will not work with JXTA. And no, it is not obvious why. In fact, I think the plot of “A Serious Man” is more obvious, and if you’ve seen the movie, you’ll probably agree with me that it is, indeed, not obvious at all.
The problem with JXTA turns out to be that both streams must be open all the time. If you, for example, close the output stream on the client side to signal that there is no more data, the server side will simply get a read timeout at some point. This basically means that you cannot use a closed stream to signal the end of the data stream, so instead you have to write the data length to the stream first, and then the data. The receiving side can then read the data length first, and then read the actual data accordingly. Which is not bad, it just sucks when you’ve spent so many hours debugging that read timeout.
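In practice that means framing the messages yourself. A minimal sketch of the idea, assuming DataInputStream/DataOutputStream wrappers around the JXTA socket streams (the helper method names are mine):
[code]
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Client side: prefix the payload with its length instead of closing the stream.
void sendRequest(OutputStream os, byte[] request) throws IOException {
    DataOutputStream out = new DataOutputStream(os);
    out.writeInt(request.length);   // length prefix
    out.write(request);             // payload
    out.flush();                    // keep the stream open afterwards
}

// Server side: read the length prefix, then exactly that many bytes.
byte[] readRequest(InputStream is) throws IOException {
    DataInputStream in = new DataInputStream(is);
    int length = in.readInt();
    byte[] data = new byte[length];
    in.readFully(data);
    return data;
}
[/code]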

 Hudson Plugin for Eclipse 1.0.8

  • October 1st, 2009
  • 11:50 pm

It’s been a while, but thanks to a couple of contributions, I finally managed to release the next version of the Hudson plugin for Eclipse, this time version 1.0.8. There are a couple of new features: Non-blocking refreshes, Date and time-information for the builds, support for HTTP Basic Auth, and a couple of bugfixes. Check out the changelog and download at http://code.google.com/p/hudson-eclipse/.

 Best feature ever in Eclipse 3.5

  • September 26th, 2009
  • 10:46 am

I recently decided to switch from regular Eclipse to the SpringSource Tool Suite, as we do quite a lot of Spring projects, and not having auto-completion in the XML files is kinda stupid. This also meant upgrading from Eclipse 3.4 to 3.5, and this is where I accidentally ran into the best feature addition ever – or more appropriately, the best usability fix ever, although I don’t actually know if this is just a feature of the STS. In that case, there’s another argument for using it.
The problem in the older versions was that if you had a number of tabs open, you could switch between them using the more or less standardized Ctrl+PageUp/Down, which was nice. However, if one of the tabs contained so-called minitabs, like XML documents or Properties files do, you’d be stuck in these tabs, and Ctrl+PageUp/Down would switch between the minitabs, which was with quite an amount of certainty not what you wanted. This has now changed, so Ctrl+PageUp/Down only changes between the top-level tabs.

The gain in productivity is almost infinite…

 Spring 3.0 Overview

  • September 22nd, 2009
  • 10:33 pm

Last week I was at the Danish Spring User Group meetup in Copenhagen, where I did a presentation on the new features in Spring 3.0. About 20 people were present, but I thought the content might be interesting to read about here as well.

Generally, Spring 3.0 does not contain any revolutionary features that will change the way you do enterprise development, but there are a number of nice new features which can make Spring easier and nicer to use. I should also note that everything I write is based on Spring 3.0 M4, where some features are still missing, and there might be a number of bugs.

General changes
Generally, support for JDK 1.4 is being removed, so most features will not be usable unless you use JDK 5. Also, support for JUnit 3 is deprecated, meaning that the good old AbstractDependencyInjectionContextTests is not to be used. Instead, use the test runner introduced in 2.5 together with JUnit 4.
In many places, varargs and generics have been introduced – this is primarily in the ApplicationContext interface, but other places too. Some interfaces, however, will not be changed, so check out the new API.
Most of Spring’s own dependencies will be upgraded. This shouldn’t be much of a problem, but look out for this if you actually use any of the libraries (for example from Apache Commons).

Changes in Spring Beans
In Spring 2.5, there were two common ways of declaring Spring beans: using the standard XML and/or using annotations and autowiring. Spring 3.0 introduces a new way of declaring beans, namely the JavaConfig API.
Using the JavaConfig API, it is possible to declare beans in pure Java. It is not a complete XML replacement, as the JavaConfig classes will still need to be declared as Spring beans, and for that you need XML, at the very least component scanning.

A quick example on declaring beans:
[code]
@Configuration
public class SpringConfig {

    @Bean
    public MessageGenerator messageGenerator() {
        return new EchoMessageGenerator();
    }

    @Bean
    public AsyncMessageGenerator asyncGenerator() {
        return new AsyncMessageGenerator(messagePrinter());
    }

    @Bean
    public MessagePrinter messagePrinter() {
        return new MessagePrinter();
    }
}
[/code]

@Configuration is a @Component, so component scanning will pick up the configuration class. Basically, @Configuration corresponds to the <beans> tag in XML, and each method annotated with @Bean corresponds to the <bean> tag. In the example above, 3 beans are declared: messageGenerator of type MessageGenerator, asyncGenerator of type AsyncMessageGenerator, and messagePrinter of type MessagePrinter. Notice how asyncGenerator has a dependency which is injected using constructor injection by simply calling the bean method. Of course, bean names can be configured using the @Bean annotations. Also, beans can be marked as lazy using the new @Lazy annotation.

Due to CGLIB technicalities, configuration classes cannot be final and they must have a no-arg constructor. This is because, even though messagePrinter() is invoked twice in the example above, the actual instance is only created once, thereby keeping the singleton scope for beans, which is still the default.

So why would you want to use JavaConfig? Frankly, I’m not sure – especially because you still have to use the old XML files. JavaConfig might be nice for type safety – when you refactor classes and methods, these changes will be picked up by the compiler. Time will tell if this will actually be useful.

Besides the new configuration API, there are some smaller changes which are relevant when declaring beans:

  • It is now possible to autowire scalar values into a bean. This happens using a combination of the new @Value annotation and PropertyPlaceholders. Annotate a bean property with, for example, @Value("${db.url}"), declare that property in a properties file, and load that file using a PropertyPlaceholder. Spring will then inject the property value into the bean (see the sketch after this list). Using this, it is often possible to avoid any XML declarations at all.
  • More annotations to complement XML configuration: @Lazy to mark a bean as lazy, @DependsOn to express a dependency, and @Primary to mark the primary autowiring candidate.
  • Finally there’s a new type conversion infrastructure, which supersedes the old PropertyEditors. PropertyEditors are stateful and somewhat cumbersome to implement. The new converters are thread-safe and simple to implement. The old PropertyEditors are still supported.
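As a quick illustration of the @Value point above, a minimal sketch (the property name and class are made up); it assumes a PropertyPlaceholderConfigurer has been registered and that app.properties is on the classpath:
[code]
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class DataSourceSettings {

    // Injected from app.properties, e.g. db.url=jdbc:hsqldb:mem:test
    @Value("${db.url}")
    private String url;

    public String getUrl() {
        return url;
    }
}
[/code]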

There are a number of smaller changes also in the beans xml schema, but I’ve not experimented with them.

Threading and Scheduling
Here we actually have some pretty nice new features. Basically, Spring 3.0 supports the java.util.concurrent package introduced in JDK 5. Earlier, the new features could be used in Spring, but you had to configure an Executor yourself, schedule jobs, and so on (of course, it was still much better than using the classical synchronized/wait/notifyAll approach). However, Spring 3.0 improves this with two new annotations, @Async and @Scheduled, a number of support classes, and a new namespace.

Async scheduling is used when you want to execute a job in a thread, and at some point you might want a result. In Spring 3.0, it would look something like this:
[code]
public class AsyncMessageGenerator {

    @Async
    public Future<String> getMessage(String message) {
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return new AsyncResult<String>(message);
    }
}
[/code]

Using this looks something like this:
[code]
public class AsyncMessageGeneratorTest {
    @Autowired private AsyncMessageGenerator generator;

    public void testGetMessage() throws InterruptedException, ExecutionException {
        Future<String> res = generator.getMessage("test");
        // do work
        System.out.println(res.get());
    }
}
[/code]

Spring will detect the @Async annotation and wrap the bean in an AOP proxy, which will then execute the method in a standard Executor thread. The Future object is a standard java.util.concurrent.Future object, which you can block on if you want to wait for the thread to complete. Any method annotated with @Async must return either void or Future. When returning a Future, use the AsyncResult class from Spring to wrap the result in the implementation.

As usual, Spring will not do this by pure magic, so you need to register a BeanPostProcessor. Luckily, this can be done using the new scheduling namespace:
[code]
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:task="http://www.springframework.org/schema/task"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/task
        http://www.springframework.org/schema/task/spring-task-3.0.xsd">

    <context:annotation-config />
    <task:annotation-driven />
</beans>
[/code]

This will register the appropriate BeanPostProcessors, and it will also configure a default Executor. If the default configuration is not OK, it can be customized via the executor attribute on the task:annotation-driven element, which points to a separate task:executor element.

The second way of using threading is by using the @Scheduled annotation. Use this if you want to do something like a TimerTask where a job should be run periodically. The annotation can be used on any void method like this:
[code]
public class MessagePrinter {
    private int num;

    @Scheduled(fixedRate=1000)
    public void printMessage() {
        System.out.println("Message number " + ++num);
    }
}
[/code]

The method shouldn’t be called manually, as Spring will pick it up automatically when instructed to do so using the task:annotation-driven element. In this example, the printMessage method will be run every second. For more advanced scheduling, the annotation also supports the cron attribute which makes it possible to schedule according to regular cron expressions.
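A cron-based variant might look roughly like this (the expression, every night at 03:00, and the method name are just for illustration):
[code]
@Scheduled(cron = "0 0 3 * * *")
public void nightlyCleanup() {
    // runs every night at 03:00 (fields: second minute hour day month weekday)
}
[/code]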

These two annotations cover most of the usual scheduling cases, and I think they’re a great addition to Spring 3.0.

REST Support
Support for REST services is probably one of the better-known features of Spring 3.0. The REST support is not revolutionary, and personally I prefer JAX-RS, but if you are already using Spring @MVC, using Spring REST might make sense. At least it’s simple to use.

Spring REST consists of two parts: A client API for accessing REST services, and a number of new annotations for @MVC for implementing REST services. The client API is centered around a new RestTemplate, which like all the other Template classes encapsulates all resource management and any transformations you’d want to make, for example from JSON to typed objects.

Using the RestTemplate, you can issue GET/PUT/DELETE/POST requests against a url. It looks something like this:
[code]
public class RestClient {
    public List<Conference> listConferences() {
        RestTemplate rt = new RestTemplate();
        rt.setMessageConverters(new HttpMessageConverter[] {
            new MappingJacksonHttpMessageConverter<Object>()
        });
        Conference[] res = rt.getForObject("http://jaoo.dk/jaoorest/json", Conference[].class);

        return Arrays.asList(res);
    }
}
[/code]

In this example, a GET request is issued against http://jaoo.dk/jaoorest/json. This returns a JSON representation, which is mapped to a regular POJO using the MappingJacksonHttpMessageConverter. This converter simply maps a JSON field to a bean property without any further configuration. For more complicated cases, the raw requests and responses can be accessed using callback methods on the RestTemplate – again like using any other Template class in Spring.
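The callback style looks roughly like the sketch below; the header and the body-to-string extraction are just for illustration (RequestCallback and ResponseExtractor live in org.springframework.web.client, ClientHttpRequest/ClientHttpResponse in org.springframework.http.client):
[code]
RestTemplate rt = new RestTemplate();
String body = rt.execute("http://jaoo.dk/jaoorest/json", HttpMethod.GET,
    new RequestCallback() {
        public void doWithRequest(ClientHttpRequest request) throws IOException {
            // full access to the raw request, e.g. for setting headers
            request.getHeaders().set("Accept", "application/json");
        }
    },
    new ResponseExtractor<String>() {
        public String extractData(ClientHttpResponse response) throws IOException {
            // full access to the raw response; here we just read the body as a string
            return FileCopyUtils.copyToString(new InputStreamReader(response.getBody(), "UTF-8"));
        }
    });
[/code]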

Developing REST services is simply a question of using Spring @MVC and a couple of new annotations. Here’s an example:

[code]
@Controller
public class RestService {

    @Autowired
    private RestClient client;

    @RequestMapping(value="/{user}", method=RequestMethod.GET)
    public void listFiles(@PathVariable("user") String user, @RequestHeader("ETag") String etag) {
        // if etag matches return NOT_MODIFIED

        System.out.println("Getting files for " + user);
    }

    @RequestMapping(value="/{user}", method=RequestMethod.PUT)
    public void createFile(@PathVariable("user") String user, @RequestBody InputStream is) {
        // write data to file
    }

    @RequestMapping("/")
    public ModelAndView getFeed() {
        ModelAndView mv = new ModelAndView();

        List<Conference> cs = client.listConferences();
        mv.setView(new ConferenceFeedView());
        mv.addObject("conferences", cs);

        return mv;
    }
}
[/code]

The @RequestMapping annotation is used as in 2.5, but now it also supports templates using {name}. A url can then be split into parts like “/blog/{year}/{month}/{day}/{item}”, and each of these parts can be bound to a method parameter using the @PathVariable annotation. This means no more manual url parsing.

This is probably the main feature, but there are also a couple of other annotations:

  • @RequestHeader can be used to bind a request header to a method parameter with automatic type conversion. Headers can be marked as required or can have a default value.
  • @CookieValue can bind a cookie to a method parameter
  • @RequestBody can bind the body of the request to a method parameter
  • @ExceptionHandler can be used to handle certain types of exceptions using other views than the default error view (see the example below)
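In a @Controller like the one above, an exception handler might look roughly like this (the exception type and view name are made up):
[code]
@ExceptionHandler(ConferenceNotFoundException.class)
public ModelAndView handleNotFound(ConferenceNotFoundException e) {
    // render a dedicated view instead of the default error page
    ModelAndView mv = new ModelAndView("conferenceNotFound");
    mv.addObject("message", e.getMessage());
    return mv;
}
[/code]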

Finally, there are a couple of new view implementations for generating feeds: the AbstractAtomFeedView and the AbstractRssFeedView.

There are some other minor additions to @MVC, but I haven’t had time to look into them.

Support for embedded databases
Many people have had the need for an embedded database, especially when running tests, and especially when using an ORM like Hibernate. Previously, you had to start one yourself, probably in some abstract test class, which all test classes then extended. In Spring 3.0, however, Spring can take care of that for you. This happens using the new jdbc namespace, like this:

[code]
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:jdbc="http://www.springframework.org/schema/jdbc"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/jdbc
        http://www.springframework.org/schema/jdbc/spring-jdbc-3.0.xsd">

    <jdbc:embedded-database id="embedded-datasource" type="HSQL">
        <jdbc:script location="classpath:embed.sql" />
    </jdbc:embedded-database>
</beans>
[/code]

This will automatically create a new DataSource called “embedded-datasource” using HSQL, and after starting the database, it will automatically load the SQL script in embed.sql from the classpath.

Real simple, and it can remove some of your template code, which is always nice.
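Since the embedded database is exposed as a regular DataSource bean, using it from a test is just ordinary injection. A minimal sketch, assuming the Spring JUnit 4 test runner and a made-up conference table created by embed.sql:
[code]
import static org.junit.Assert.assertEquals;

import javax.sql.DataSource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.simple.SimpleJdbcTemplate;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml")
public class RepositoryTest {

    @Autowired
    private DataSource dataSource;

    @Test
    public void schemaIsLoaded() {
        // embed.sql has already been executed against the in-memory HSQL instance
        SimpleJdbcTemplate jdbc = new SimpleJdbcTemplate(dataSource);
        assertEquals(0, jdbc.queryForInt("select count(*) from conference"));
    }
}
[/code]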

Spring Expression Language
A new module in Spring 3.0 is the Spring Expression Language. It’s somewhat like the existing expression languages such as JSTL, OGNL, and so on, just with some additional features. Why? Good question, but Spring EL does have some nice features, and Spring EL will be used as the unified EL across all Spring modules. Whether it’s worth the effort or not, I can’t say.

Spring EL expressions are placed in #{}, and can be used in bean XML declarations when setting properties or referring to beans. Here are some examples:

[code]
#{bean.messages} // access a property on the object named "bean"
#{new java.io.File(bean.path)} // create a new instance of java.io.File
#{T(String).format('%s', bean.message)} // invoke a static method on the String class (note the usage of the T operator)
#{props['jdbc.url']} // access a property in the Properties object named "props"
#{bean?.inner?.prop} // navigate safely through a chain, avoiding any NPEs
#{users.?[active == true]} // select all objects where the active property is true from the collection named "users"
[/code]

There are many other features, this is just a short sample. Of course you can also use SpEL outside Spring, but that’s a little more complicated.
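For the standalone case, the basic API looks roughly like this (a minimal sketch using the parser classes from the org.springframework.expression package, nothing container-specific):
[code]
import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
import org.springframework.expression.spel.support.StandardEvaluationContext;

public class SpelDemo {
    public static void main(String[] args) {
        ExpressionParser parser = new SpelExpressionParser();

        // Evaluate a simple literal expression
        Expression exp = parser.parseExpression("'Hello'.concat(' World')");
        System.out.println(exp.getValue(String.class));

        // Evaluate an expression against a root object
        StandardEvaluationContext context = new StandardEvaluationContext("root string");
        System.out.println(parser.parseExpression("length()").getValue(context, Integer.class));
    }
}
[/code]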

Model Validation
Spring 3.0 will support JSR-303 declarative model validation when it becomes final. The M4 release does not have any validation code yet, but that should arrive with the release candidate. Because of this, I haven’t had the opportunity to examine it closer.

Release schedule
According to SpringSource, RC1 should be out this week. RC1 will be feature complete and with updated documentation. The reference documentation is already pretty up to date, and so are the Javadocs.

After RC1, the 3.0 GA (General Availability) release should be out around October, but that depends on the amount of feedback from the community. Finally, 3.1 should be out in early 2010 – no major features are planned yet, but all new features from Java EE 6 should be supported – probably @Inject, JSR-303 final, and so on.

That’s it for now. As I’ve already mentioned, there are numerous smaller changes and bugfixes, so check out the bug tracker if you’re interested in something particular.

 Properties: No Thanks

  • July 31st, 2009
  • 3:03 pm

Via Reddit, I just read an article about how much better C# is than Java. As a language, this might be true; the platform is another matter. However, one of the things many people think is missing from Java is properties, presumably so you don’t have to write those getters and setters all the time.
First of all, I don’t know who actually writes that stuff – all modern IDEs can generate it automatically. Secondly, how do you actually know when to implement something as a property rather than as something which looks like a setter/getter? I haven’t done much C#, but I have seen quite a lot of classes containing both properties and getters/setters without any obvious reason. Isn’t this an indication that properties somewhat violate OO encapsulation? Why do I have to know whether something is a special property or not?

However, the main reason why I dislike properties is that I’m lazy. If I need to set a property on an object, but I don’t know which ones are available, I will type obj.set and hit the autocompletion key like Ctrl-space in Eclipse. This will then give me all writable properties. Likewise, I can do obj.get and hit the key and get all readable properties. I know, “real” properties are displayed as being readable and/or writable, but they’re then mixed with all the methods on the object too, and I’ve already seen way too many attempts at setting a read-only property, exactly because of this.

 Hudson Plugin for Eclipse 1.0.7

  • May 20th, 2009
  • 5:49 pm

It seems there were a couple of regressions in the 1.0.6 version of the Hudson plugin for Eclipse. The new version checked for valid build status values, but apparently, I didn’t quite get all the values right, which resulted in an NPE when starting the plugin.

A couple of other small fixes were also applied, so I’ve released version 1.0.7. Download at Google Code or use the update site.