51 holes plugged in latest Java security update

51 vulnerabilities have been patched in the latest Java security update, the first of a new quarterly cycle. Until now, Java's official patch release schedule has been three times a year – although the emergence of dangerous zero-day exploits has forced Oracle to issue two out-of-cycle emergency patches over the past twelve months.
Of the 51 Java vulnerabilities patched in this update, all but one are remotely exploitable without the need for a username and password, and 12 were given the maximum possible CVSS score of 10/10.
As with the majority of high-profile Java vulnerabilities, almost all of these target browser Java applets, and as such security advisors continue to recommend that users disable the browser plugin (or, if possible, remove it altogether). However, security firm Qualys note that two of the 51 vulnerabilities, rated "highly critical", can also apply to server installations.
The new schedule is in line with Oracle’s quarterly Critical Patch Update (CPU) bulletin, which also covers the company’s other software. VirtualBox, MySQL Server and GlassFish are among the many other applications that have received security updates this week.
Last month, Trend Micro highlighted a new wave of attackers, who are taking advantage of weaknesses in Java’s native layer. Though difficult to pull off, it appears knowledge of such exploits has become widespread, with highly dangerous results – infiltration of the native layer allows for execution of arbitrary code.
On the Sophos Naked Security blog, researcher Chester Wisniewski praised the move to a more regular cycle, but said it still wasn’t regular enough – especially since Microsoft and Adobe provide monthly patches for their browser plugins.
“Put the award on the shelf in your lobby, sell the ten million dollar boat and hire the engineers needed to update the Java patch cycle to monthly with the spare cash,” concluded Wisniewski, referring to Oracle’s recent America’s Cup win. “3+ billion devices will thank you.”


Tutorial: Integrating with Apache Camel

Running the rule over the open source integration framework


Since its creation by the Apache community in 2007, the open source integration framework Apache Camel has become a developer favourite. It is recognised as a key technology for designing SOA / integration projects and addressing complex enterprise integration use cases. This article, the first part of a series, will reveal how the framework generates routes from its Domain Specific Language, how exchanges take place along those routes and are processed according to the chosen patterns, and finally how integration occurs.

Introduction

From a general point of view, designing an integration architecture is not an obvious task, even if the technology and the frameworks you want to use are relatively easy to understand and implement. The difficulties lie in the volume of messages, the transformations to apply, the synchronicity or asynchronicity of exchanges, processes running sequentially or in parallel, and of course the monitoring of such projects running in multiple JVMs.
In traditional Java applications, we call methods from classes, while objects are passed and/or returned. A service (such as payment or billing) is a collection of classes. Called methods are chained and objects transport information, sometimes enlisted in transactions but always deployed within the same Java container (web, JEE, standalone). Unless we have to call external systems or integrate legacy applications, RDBMSs and so on, most of the calls are made locally and synchronously.
If a service is to be reusable, it needs to be packaged and versioned in a library, and communicated to the project that will use it. This approach is fine for projects maintained by in-house development teams, where costs can be supported by IT departments, but it suffers from various issues and most of the time requires us to use the same programming language, or a specific technology to interconnect processes (RPC, IIOP, …), in the container where the code is deployed.

Figure 1: SOA
To allow applications to be developed independently, without such constraints, decoupling must be promoted between the issuer of a request/message and the service in charge of consuming it. This architecture paradigm is called Service Oriented Architecture, and it uses a transport layer to exchange information between systems. One of the immediate benefits of SOA is to promote a contract-based approach, in which the services exposed between applications are defined and managed according to 'governance rules'.
The SOA approach has been able to federate different teams and tackle the problems surrounding the development of more complex projects. This IT transformation is required as companies need to be more agile to adapt to market needs: information must be provided in real time, and business adaptations need to be supported by existing legacy and back-end systems.
While the SOA philosophy has been widely adopted, the learning curve to master XML, XSD schemas, web services and business process engines, the creation and management of transversal teams, the governance needed to manage services and the skills to be acquired have certainly been factors in explaining why SOA still struggles to gain adoption within corporations. Moreover, IT departments are not only concerned with promoting and managing web services and registries, but also with interconnecting, exchanging, transforming and validating information between disparate systems. This integration aspect of IT work was completely underestimated when the SOA principles were elaborated.

Enterprise Integration Patterns

In 2003, Gregor Hohpe and Bobby Woolf published a book called 'Enterprise Integration Patterns' in which they not only describe complex use cases, but also define a vocabulary, grammar and set of design icons to express the complex integration patterns that IT departments have to address. This book has changed the way development teams (business/functional analysts, data modelers and developers) collaborate to design integration / SOA projects. The discussions were no longer focused just on how services and XML should be structured and business processes imagined, but also on which patterns should be used to solve integration use cases (aggregation, splitting, filtering, content-based routing, dynamic routing). This book has moved practitioners towards a more agile programming approach. To support the EIPs described in the book and help developers solve integration use cases, the Apache Camel integration framework for Java was created five years ago.

EIP design icons

Discover Apache Camel

Representing EIP patterns for aggregation or routing requires that we 'express' them using a language. This language is not a new programming language, but rather a language specific to a domain, which describes problems adequately for the chosen domain (integration). Apache Camel is a Java integration framework that supports such a Domain Specific Language (aka DSL; for further information, see the Camel documentation) using object-oriented languages like Java, Scala and Groovy. No parser, compiler or interpreter is required, but instead a list of commands or instructions which are sequenced:
instruction1().instruction2()....instructionN(); 
Apache Camel is also defined as a “Mediation and Routing” engine. Let’s think of the global road network: we can transport vehicles of different type and size, with passengers of different origin, color, age, sex, between cities and capitals. According to traffic conditions, the trip can be adapted and alternative roads used. Likewise Apache Camel transports ‘messages’ along Routes.
from("Brussels")
 .to("Paris"); // Transport passengers from Brussels Capital to Paris
Each Camel route starts with the from instruction, which is particularly important as it acts as a consumer and plays a specific role depending on whether it is triggered ('event-driven architecture') or reads data at regular intervals ('polling architecture'). The consumer is a factory: whenever data is received, 'Messages' are created and transported along the Apache Camel route.
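To make the distinction concrete, here is a minimal sketch (the endpoint URIs are illustrative, not taken from this article): the timer component polls on a schedule, while a JMS endpoint is event-driven and only creates an exchange when a message arrives.
// Polling consumer: the timer component fires every 5 seconds,
// creating a new exchange each time
from("timer://checkTravelers?period=5000")
       .log("Polled at ${date:now:HH:mm:ss}");

// Event-driven consumer: an exchange is created only when
// a message lands on the queue
from("jms:queue:travelers")
       .log("Received ${body}");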

Of course, Apache Camel does not transport 'passengers' in a route at all, but 'messages'. These messages pass through a collection of steps, aka processors, to transform, validate, format and enrich the content of the information received. The framework provides different specialized processors (Bean, Log) to simplify the manipulations we would like to apply, as in the code below:
from("Brussels")
       .bean("Border","validPassport")
       .log("Passport has been controlled")
       .bean("Border","controlTicket")
       .to("log:travel://LogLevel=INFO" + "Ticket has been controlled")
       .to("Paris");
Each processor placed after the 'from' passes the information along, forming a chain like the wagons of a train, as below:
from("")
       ...
       .to("log:travel://LogLevel=INFO" + "Ticket has been controlled") //
       .to("file:///outputDirectoryWhereFileWillbeCreated") //
       .to("http://www.google.be?doASearch") // Call External HTTP Server
       .to("jms://queue:outputQueue; // Response received is published in a queue
Nevertheless, certain processors will produce a message that Camel sends to a server (SMTP, FTP), application (RDBMS), broker (JMS) or another Camel route (DIRECT, SEDA, VM), and in some cases will wait until they get a response (HTTP, TCP/IP, WS, REST, WebSocket).
One of the key benefits of Camel is that it offers the possibility to take decisions according to the information it carries, using a Message structure. Such a Message, which corresponds to an Exchange object, contains the information or object carried in a Body, along with metadata held in Headers.
The metadata allows you to document the objects transported, but also to know where they come from, their origin (File, FTP, WebService, SMTP, JDBC, JPA, JMS, …) and where they should go. To support the decisions, Camel uses the EIP patterns (Content Based Router, Filter, Aggregator, Splitter, …) together with a specific Expression Language (Simple, Constant, XPath, XQuery, SQL, Bean, Header, Body, OGNL, Mvel, EL, ...).

Message structure
Decisions are taken by Predicates, which we can compare to If/Then/Else or While/For statements. The routing engine determines what to do with the "Message(s)" and where they should go.

The choice/when construct used by the Content Based Router evaluates (using the predicate and expression language) whether the condition is met. If it is, the exchange moves on to the processors defined in that branch; otherwise it moves into another pipeline. All this is demonstrated below:
  from("Brussels")
       .bean("Border","validPassport")
       .choice()
            .when()
                .simple("${header.isValid} == true") // Simple language checks if the status is equal to true
                   .log("Passenger has been controlled")
                   .log("We can now control their ticket")
                   .bean("Border","controlTicket")
                   .to("Paris")
            .otherwise()
                   .log("Your are not authorized to continue your trip");  
For some of the components used, a response is expected by the receiver called (HTTP, WebService, REST, JMS Request/Reply, TCP/IP, …) or by the sender issuing the message. In this case, Camel adapts the pattern used internally to transport the message. This pattern is normally of type InOnly, but when a response is required, the pattern used will be InOut. To transport the information, and to avoid mixing an incoming Message with an outgoing Message, Apache Camel uses two different objects for this purpose: in and out. When no response is required, which is the case when we use, for example, a File component, the out object is always null.
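As an illustrative sketch (not from the original article; it assumes org.apache.camel.Processor, Exchange and ExchangePattern are imported), a processor can inspect the pattern of the exchange to decide whether a reply must be populated:
from("jms:queue:requests") // JMS Request/Reply arrives with the InOut pattern
       .process(new Processor() {
           @Override
           public void process(Exchange exchange) throws Exception {
               if (exchange.getPattern() == ExchangePattern.InOut) {
                   // The caller is waiting: populate the out message as the reply
                   exchange.getOut().setBody("Reply for " + exchange.getIn().getBody(String.class));
               } else {
                   // InOnly (fire-and-forget): only the in message travels on
                   exchange.getIn().setHeader("processed", Boolean.TRUE);
               }
           }
       });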


One step further

As traffic is controlled by operators, Apache Camel provides an environment to manage routes (start/stop/suspend/resume the traffic in routes). This environment is called a container, or more precisely a CamelContext.

The container is not only the runtime where the routes are deployed, but also acts as a complex ecosystem: it can trace exchanges, expose management information through JMX, handle thread pools, discover routes, shut down routes gracefully and generate the unique identifiers used when an exchange is created.
The CamelContext also registers the components that we need to consume or produce information. According to the scheme name contained in the URI, Apache Camel scans the classes loaded by the classloader to find the component it should use:
"scheme://properties?key1=val2&key2=val3 //
"file:///home/user/integration? "
"timer://myTimer?delay=2s&period=10S"
The Component class is a factory which creates an Endpoint object based on the parameters collected from the URI (?key1=value1&key2=value2 ...). This object contains the methods required to create a Producer or Consumer, according to the role played by the component.
Typically, a polling consumer regularly scans a directory of a file system or listens to a JMS destination for messages; whenever data is read, it creates an Exchange that it propagates to the next processor, as shown below:
@Override
protected int poll() throws Exception {
    Exchange exchange = endpoint.createExchange();
    // create a message body
    Date now = new Date();
    exchange.getIn().setBody("Hello World! The time is " + now);
    try {
        // send message to next processor in the route
        getProcessor().process(exchange);
        return 1; // number of messages polled
    } finally {
        // report any exception that was not handled during routing
        if (exchange.getException() != null) {
            getExceptionHandler().handleException("Error processing exchange", exchange, exchange.getException());
        }
    }
}
At the opposite end, the Producer waits until it gets a Camel Exchange from a processor, then manipulates the 'Message', enriches it and changes the 'metadata':
public void process(Exchange exchange) throws Exception {
    // Add a new header
    exchange.getIn().setHeader("FrequentFlyer", "true");
}
A Camel project typically consists of a Java main class in which we create a DefaultCamelContext, register the Camel routes and start the container. As described in the following example, a RouteBuilder class is required, as is its configure method, which calls the static methods (= instructions) used to design a Camel route (= collection of processors). A RouteBuilder allows you to create one or many Camel routes.
public class MainApp {
    public static void main(String[] args) throws Exception {
        // CamelContext = container where we will register the routes
        DefaultCamelContext camelContext = new DefaultCamelContext();
        // RouteBuilder = where we design the routes, here using the Java DSL
        RouteBuilder routeBuilder = new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                from("file:///travelers")
                    .bean("Flight", "TransportPassenger")
                    .to("file:///authorizedTravelers");
            }
        };
        // Add the routes to the container
        camelContext.addRoutes(routeBuilder);

        // Start the container
        camelContext.start();
        // When work is done we shut it down
        camelContext.stop();
    }
}
Compared to other integration frameworks, Apache Camel is unique in that it can handle Java objects and automatically convert an object's type to the one expected by the Processor or Predicate.

During the creation of the CamelContext, all the classes in charge of type conversion (File to String, Reader, String to DOM, …) are loaded into an internal registry, which the Camel processors query during exchange processing. These converter classes come from the different JARs on the Java classpath. While this process happens by default, you can also use a specific instruction to tell Camel which converter should be applied to the object type received. See below:
from("file:///travelers") // The file endpoint polls the "travelers" directory for new files
    .convertBodyTo(String.class) // Convert Camel's generic file object to a String,
                                 // as required by the XPath expression language
    .choice()
        .when()
            .xpath("/traveler/@controlled = 'true'") // Check whether the condition is matched
                .log("Passenger has been controlled")
                .to("file:///authorizedTravelers")
        .otherwise()
            .log("You are not authorized to continue your trip");

Next Time

In this first part of the Apache Camel series, we have introduced some of the basic functionality of this Java integration framework, which implements the Enterprise Integration Patterns and uses a Domain Specific Language to design routes transporting messages between systems/applications.
The DSL allows us to define instructions which are read sequentially. When information is received or consumed by a Camel component, an Exchange is created and moved to a collection of processors which transform the content of the message held in the Body object. The framework is able to take decisions (Content Based Router, Filter, …) using one of the supported Expression Languages (XPath, Simple, SQL, Header, Body, Constant, EL, JavaScript, …) with a Predicate defining the condition. During the transportation of the Exchange, Camel automatically converts objects from and/or to a specific type, using an internal registry of converting strategies. A Camel project typically uses a container called a CamelContext to register the Camel routes and endpoints. In the second part of the series, we will cover more advanced features of Camel (transformation of complex data formats, multithreading, asynchronous exchanges, ...).

Exploring the future of the JVM at JavaOne


When Sun Microsystems was acquired by Oracle three years ago, to say that the Java Virtual Machine (JVM) ecosystem was overgrown was to put it mildly. As Java Platform Group JVM architect, Mikael Vidstedt found himself faced with the unenviable task of pruning seven JVMs down into a more manageable entity.
He’s been coordinating Oracle’s technical vision for the JVM ever since, and after a year that’s been eventful for mostly all the wrong reasons, faced up to a JavaOne audience with plenty of questions for where the virtual machine can go next.
This time last year, Vidstedt was content with Oracle's progress. As far as he was concerned, JVM convergence was a mission accomplished – with Java Flight Recorder and Mission Control incorporated and PermGen purged, everything else was ticking over nicely. Then wave after wave of security issues came to light – most of which Oracle would rather not mention in its triumphant keynote speech, thank you very much.
Even though the issue wasn't addressed by the big guns, and was swiftly batted down in the Java media panel, Vidstedt readily acknowledged the many problems that Oracle has faced over the past year due to vulnerabilities in Java – pointing out that the sum of these problems was demonstrated by the fact that this year's event has its own dedicated security track.
With this in mind, security will apparently remain a key focus area for future JVM projects. But that's not the only big issue. As Vidstedt noted, with cloud computing here for good, situations where many, many JVMs are running (almost) the same application will become the norm. The focus will be on how best to share resources across machines, and sound distribution management will be critical.
Understandably, lambdas are a hot topic at JavaOne this year, and Vidstedt was keen to emphasise their benefit to future JVM developments. Oracle has invested literally centuries of man-hours grappling with the issue of how to make non-Java languages run efficiently on the JVM. With the invokedynamic instruction added in Java 7, real progress has been made, but there is still considerable work to do. Going forward, lambdas will be key pivots for a language-blind JVM.
In terms of serviceability, Java Flight Recorder (re-released earlier this month) continues to be a work in progress. The most exciting development in this area is the addition of automated data analysis in Java Mission Control, drawing analysis from events, and coming to high level conclusions.

 

Spring creator Rod Johnson joins Hazelcast board


 Rod Johnson, creator of the Spring Framework, is to join Hazelcast, the company behind the in-memory data grid technology of the same name. The move is part of an investment round, which saw Johnson and Bain Capital Ventures together inject a total of $2.5 million into the startup.
Hazelcast was founded in Istanbul, Turkey in 2008 off the back of an open source project. The company has since opened offices in Palo Alto, and Bain Capital MD Salil Deshpande claims that Hazelcast is "positioned to clean up in a market that Gartner says will be worth a billion dollars by 2016".
The investment is particularly notable, however, due to the involvement of Johnson. The Australian developer and entrepreneur created the highly influential Spring Framework for Java in 2003, and cofounded SpringSource – later purchased by VMware – off the back of its success.
However, in July 2012 he announced his departure from VMware, and in effect Spring, to “pursue other interests”. He then set tongues wagging with a position on the board of Scala creators Typesafe in September (in which he continues to be “actively involved”).
Alongside Typesafe and now Hazelcast, Johnson has also invested in – and serves on the boards of – the companies behind graph database Neo4j, real-time JavaScript platform Meteor and search engine Elasticsearch. InfoQ report that news of a further two board positions will be announced "in the next few weeks". [Corrected Sep 20]
It appears that, a year on from his departure, Johnson’s post-Spring plan isn’t just to help kill Java with Scala, but to spread his open-source business knowledge throughout the industry. And perhaps make some cash on the side while he’s at it.

Should Oracle be doing more for Java 6 users?

It may not be Halloween for another month or so, but a grim blog post from security expert Christopher Budd will send a shiver down the spine of users with desktop Java still installed. As we reported last week, Java's security issues have become even more complex this year, with a new raft of super-skilled hackers capable of targeting its native layer and exploiting system vulnerabilities on an unprecedented level. Unfortunately for Oracle, the bad news has continued, with Budd delivering the grim prediction on September 10 that there's every reason to believe that the worsened situation is "here to stay", and likely to get even worse before it gets better.
In a doom laden post on Trend Micro, Budd identified the native layer exploits as emblematic of an increasing sophistication in attacks, and just one sign that things had changed for the worse. The coalescence of this issue with a new wave of  attacks targeting unpatched vulnerabilities in Java 6, a widely-deployed but, as of  February 2013, no-longer supported version of Java, has led the analyst to conclude that the overall ‘threat environment’ for Java has increased significantly.
More than 50% of Java users are still actively employing Java 6, despite the huge risks of running without security support, creating an unprecedented situation for Oracle. Java 6 users are effectively now a sitting target, and Budd is in no doubt that new waves of attacks are inevitable as malware developers get busy reverse-engineering Java 7 fixes to have their wicked way with the old, unsupported version.
Of course, the simple solution would be to just uninstall Java 6 and upgrade to Java 7 - but, as we’ve seen, that’s not a realistic scenario, or a feasible solution for every user, and whilst there is a premium option where users can pay for extra Java 6 support, that’s simply not a solution for everyone.
Information security consultant Michael Horowitz points out on his Java version testing site that there seems to be a communication failure between Java browser plug-ins and browsers, meaning that it can be difficult to find and catalogue all the versions of Java on a PC. The platform is so ubiquitous that it would be virtually impossible to completely eradicate vulnerable versions - and so the line of defence must shift from individual devices to the network as a whole. As Budd reflects, this gives a new and sinister connotation to Sun Microsystems' marketing slogan "The Network is the Computer."
When support for Windows XP is withdrawn by Microsoft next spring, Budd frets that “a perfect storm of permanently vulnerable systems” will be created, leading him to hypothesise that summer 2014 could be a veritable spree for cyber criminals.
For those unable to jump ship from Java 6, the best they can do is try to mitigate the security issue. Since March, Red Hat has assumed leadership of the OpenJDK 6 community, and Apple has actively updated OS X to automatically disable Java if it hasn't been used for 35 days. Oracle is highly aware of the issue and has been pursuing a Microsoft-style 'security push', but perhaps it would be better served by re-examining its "End of Life" date policy and its abandonment of non-premium customers, not only as a goodwill gesture towards the millions of users still dependent on Java 6, but to bolster the integrity of Java as a whole.

Developers now able to get their hands on Java 8 preview

Well, it’s near one deadline


Oracle has now released the preview test build of Java SE 8, intended for broad testing by developers – although the much delayed general-release development kit will be percolating for at least another few months.

Yesterday, Oracle's Mark Reinhold, chief architect of the Java Platform Group, urged developers to test out the developer preview of JDK (Java Development Kit) 8. So what are the key features to look out for in this release? Well, of course there's the much anticipated Project Lambda (JSR 335) - cited in a blog post from Reinhold as one of the main reasons for pushing back the release earlier this year - which "adds lambda expressions, default methods, and method references to the Java programming language and extends the libraries to support parallelizable operations upon streamed data."
Other noteworthy developments in JDK 8 include a new date and time API, compact profiles, and the Nashorn JavaScript engine. There are also some “anti-features”, such as the removal of the permanent generation from the HotSpot virtual machine.
This month was originally slated for the release of JDK 8, but due to the numerous security concerns that have dogged Java recently, Oracle wisely postponed availability until Q1 2014 - at the earliest. Though, based on the track record for this release, we’re hedging our bets.
Although Oracle has been putting its energies into resolving outstanding security glitches and restoring confidence, recent incidents show that there's still considerable work to be done. With exploits like watering hole attacks and zero-days still fresh in the minds of many, Oracle won't want to risk the final release becoming yet another excuse for Java baiting.
If you’re eager to get stuck into the release, you can download it here.

 

Apache Cassandra v2.0 unleashed

The little Big Data project that could 


The Apache Software Foundation announced the launch of the second version of the open source big data distributed database Apache Cassandra this week. Used by mega-sites like Netflix, CERN, Reddit and Instagram, the highly-scalable database has achieved huge adoption since its 2010 'graduation', growing bigger and more powerful all the time - and with heavy hitters like these backing your project, you need to be offering the goods.
The open source distributed database system is intended for storing and managing large amounts of data across commodity servers, and is designed to serve as both a real-time operational data store for online transactional applications and a read-intensive database for large-scale business intelligence (BI) systems. Additionally, the fully distributed architecture integrated within Apache Cassandra in theory gives high fault tolerance.
According to the Foundation’s press release, the new version of Apache Cassandra comes complete with new enhancements such as lightweight transactions, triggers, and CQL (Cassandra Query Language) enhancements.
What’s the spiel behind these new features?
  • Lightweight transactions that offer linearizable consistency.
  • Experimental Triggers Support.
  • Numerous enhancements to CQL as well as a new and better version of the native protocol.
  • Compaction improvements (including a hybrid strategy that combines leveled and size-tiered compaction).
  • A new faster Thrift Server implementation based on LMAX Disruptor.
  • Eager retries: avoids query timeout by sending data requests to other replicas if too much time passes on the original request.
You can download both the source and binary distributions of Cassandra 2.0.0 at: http://cassandra.apache.org/download/

 

The top ten coolest features in NetBeans IDE 7.4


NetBeans IDE 7.4 is all about letting you work with JDK 8 previews, enabling you to integrate HTML5 into Java EE applications, providing tools for developing mobile applications via Apache Cordova, and deploying applications to mobile devices.
Lots of documents and screencasts have been published around this release of NetBeans IDE, but let's wade through all the new features and quickly focus only on the ten that are most impressive.
 
1. JDK 8 Preview. Previews are available of the upcoming JDK 8, so you're able to register JDK 8 previews in the IDE, check for JDK 8 profile compliance, and refactor your code to change from anonymous inner classes to lambdas (see the sketch after this list).

2. HTML5 Integration with Java EE 7. Java EE developers wanting to benefit from the rich and flexible UIs made possible by HTML, JavaScript, and CSS, can now integrate these very easily via tools and wizards directly into their Maven or Ant based Java EE applications. Live web preview of web pages is available in the embedded WebKit browser, the Chrome browser with NetBeans plugin, as well as on mobile devices, i.e., Chrome on Android and Mobile Safari on iOS.

3. Apache Cordova. Creating applications for iOS and Android doesn't mean needing to know any native languages. Develop your applications in HTML, JavaScript, and CSS and then use the Cordova tools built into NetBeans IDE 7.4 to convert them to native packages, then deploy them directly to the native emulator or to the native device. When deployed to the native device, you can debug JavaScript and style CSS directly on the device.

4. Android and iOS. Once an application, whether it is created via Cordova or not, is ready to be tried out on a mobile device, deploy it quickly and easily from the IDE. New tools are included for deployment to Android and iOS emulators and native devices.

5. Knockout, AngularJS, and ExtJS. JavaScript support in the IDE has been overhauled. Support includes JavaScript framework-specific syntax coloring, code completion, as well as other editing and refactoring tools for the following JavaScript frameworks: jQuery, JSON, Knockout, Ext Js, AngularJS, JsDoc, ExtDoc, and ScriptDoc. Knockout, AngularJS, and ExtJS are supported for the first time.  

6. SASS and LESS. Editing support for Sassy CSS and LESS CSS preprocessors is provided, including syntactic and semantic coloring for language constructs, indentation, reformatting, code folding, and file templates. Code completion and refactoring tools are available for variables and mixins.

7. Maven. NetBeans IDE is famous for its natural and intuitive Maven support. The previous release introduced the effective POM view, and some of the many enhancements in NetBeans IDE 7.4 include reworked compile on save, as well as a brand new Build Execution Overview dialog to easily track the goals that have been executed.

8. Java Editor. The Java Editor, always the centerpiece of NetBeans IDE, is significantly more responsive than before. It sports a range of completely new and creative features, such as enhanced code completion, Java code metrics, and many new hints and refactorings.

9. Versioning. Out of the box, NetBeans IDE supports Mercurial, Subversion, and Git. Significant enhancements have been made in the support for all three tools, including fine-tuned diffing and reintegration between branches.

10. Task Manager. JIRA and Bugzilla issues can now be tracked within the new Task Management window. Create new issues within the IDE, work with them directly as you code, and simplify the development flow by having your code and your issues available to you within the same IDE.
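To illustrate the lambda refactoring mentioned in item 1, the IDE's hint converts an anonymous inner class into the equivalent lambda expression, roughly like this (a generic before/after illustration, not taken from the NetBeans documentation):

// Before: an anonymous inner class implementing Runnable
Runnable task = new Runnable() {
    @Override
    public void run() {
        System.out.println("Running");
    }
};

// After the IDE's refactoring: the equivalent lambda expression
Runnable task = () -> System.out.println("Running");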

More details can be found here: http://wiki.netbeans.org/NewAndNoteworthyNB74

 

Fixing Java Production Problems with Application Performance Management


Tracking down issues in a production system can be a nightmare, but application performance management systems such as New Relic – which combine isolated log files and network and database monitoring – can help.
Finding and fixing issues in a production system can be really difficult. Usually by the time the problem is visible, users are already complaining. Fixing these problems under the eye of management is no fun for anybody, especially when you don't know where the problems may be.
You may or may not have access to the servers in question, and you may have to diagnose an issue involving multiple servers. And sometimes there’s a third party involved, such as a database administrator (DBA) or hosting company, for whom your problem is not a priority. Depending on how detailed your log files are, you might be able to search through them and find some hints. It may also be that your code is using third party jars, and they may not log the level of detail you need.

How APM can help

It's often possible to derive useful information from log files, network monitoring, database server monitoring, and the like. The problem there is that you're trying to infer things about your code's behavior from the information that you’ve already decided to log. If you change your logging to add more information, it's too late. The error has already happened.
Application Performance Management (APM) systems allow you to remotely instrument your code and log data to an external system continuously. This is advantageous for several reasons. Since this data collection and logging is happening in the background, you don’t need to think about logging metrics during software development. When you need information about the performance of your software in production, the information has already been gathered for you during the normal operation of the system. It has been gathered under real system load on the actual production environment, as opposed to data from a test system under simulated load. It also means that when an error occurs in production, such as a performance problem or a threading problem, data about it has already been gathered and is already available.
In addition to providing help diagnosing problems, an APM system can provide more visibility into your code's performance and usage patterns by providing metrics about which pages are accessed the most often and how much time the server is taking to generate those pages. Once a page has been identified as needing improvement, an APM can help you drill in and see where the server is spending the most time. This lets you prioritize your fixes.
For example, this page shows statistics about our office's site, which serves people's contact information. It's a small site, but it gives a feel for what APM can tell you. We see usage spikes, and can see how much time is being spent in application code versus database code. And Figure 1 identifies the PeopleController#phonenumbers page as the slowest on the site.
Figure 1: Summary dashboard: Shows general statistics about an app in New Relic
In this article, I'll demonstrate using New Relic's APM system to help identify production performance issues. I created a demo app with a single servlet that takes in a first name and last name and searches for entries with that name in a database using Hibernate.
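As a rough sketch of what such a servlet might look like (this reconstruction is mine – the class, field and query names are assumptions, not code from the actual demo):

import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class QueryServlet extends HttpServlet {

    private SessionFactory sessionFactory; // assumed to be configured at startup

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Session session = sessionFactory.openSession();
        try {
            // Hibernate generates the SQL for this query; the columns in its
            // where clause are the ones examined later in the article
            List<?> people = session.createQuery(
                    "from Person p where p.fname = :fname and p.lname = :lname")
                    .setParameter("fname", req.getParameter("fname"))
                    .setParameter("lname", req.getParameter("lname"))
                    .list();
            resp.getWriter().println("Found " + people.size() + " matching people");
        } finally {
            session.close();
        }
    }
}

Adding APM to a system is fairly simple: to get started, I only had to set up an additional directory containing code and configuration, which contains the contents of a zip file downloaded from New Relic.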

6:/opt/local/apache-tomcat-7.0.34/newrelic% ls
CHANGELOG               newrelic-extension-example.xml
LICENSE         newrelic-extension.xsd
README.text     newrelic.jar
logs            newrelic.yml
newrelic-api.jar
7:/opt/local/apache-tomcat-7.0.34/newrelic%

After the directory is created, you can activate New Relic with a simple change to the launch script. In this case, the change is in Tomcat’s catalina.sh script.

# ---- New Relic switch automatically added to start command on 2013 Jan 08, 11:43:26

NR_JAR=/opt/local/apache-tomcat-7.0.34/newrelic/newrelic.jar; export NR_JAR

JAVA_OPTS="$JAVA_OPTS -javaagent:$NR_JAR"; export JAVA_OPTS


Figure 2: Web Transactions page showing four very slow servlet calls
Once your server has been launched with this new flag (see Figure 2), it will report data to New Relic. The data can then be mined to help you monitor your code as it runs.
In this case, the performance problem seen in Figure 3 is easy to spot. My single servlet is taking between 8000 and 9000 milliseconds every time it runs.
Figure 3: Transaction Trace page showing where time was spent in a specific servlet call
The dashboard shows us that the issue lies with the QueryServlet that’s taking a long time to run. It’s revealed to be a database query that is taking all but 6ms of the slow request. Since I used Hibernate in my persistence layer, it’s generating SQL for me. Tweaking the SQL code may not be so simple a task (Figure 4).
Drilling a little deeper shows us exactly which query was slow:

select person0_.id as id0_, person0_.fname as fname0_, person0_.lname as lname0_, person0_.middlename as middlename0_ from person person0_ where fname=? and lname=?
Figure 4: SQL Detail Tab on the Transaction Trace showing the SQL as captured by New Relic
Now I can send this query to my DBA and ask what can be done to make that query run faster (Figure 5). It turns out to be a simple fix. The query is only against a single table which has over 21 million rows, and none of the columns in the 'where' clause of the query have indexes.
Figure 5: DBA tool showing that the table being queried isn’t indexed for our query
The DBA has added some indexes to the table. Now I can run the app again and see the results of the change in Figure 6.
Figure 6: Transaction Trace showing the same servlet call after the table indexes were added

Conclusion

We improved the system response time from 8470ms to 20ms, a huge improvement in a simple case. But most importantly, I was able to get all the information I needed in an organized fashion in the browser. I didn’t waste any time logging into servers, viewing log files or anything like that. I also didn’t need to change anything in my source code to enable this data collection. I added the New Relic jar to the server launch scripts, and after that, my server logged information to New Relic in the background. From the New Relic website, I was able to track down my performance problem. I drilled through to the slow web transaction, looked at different parts of the transaction to see what was the slowest, and acted on those results.
This was a simple demonstration where the fix was obvious once the slow query was identified, but it illustrates the value of app performance management. Not only can it be used to find performance problems, it can also be used to measure your app in your production environment so you can know where to spend your time and money to make your system better.
Author Bio: Dan has been writing Java code since 1996, and is currently a senior software engineer at New Relic in Portland. When he is not at work, he enjoys playing with trains with his son and writing model train related software for his iPhone.
This article first appeared in JAX Magazine: Socket to Them. For other previous issues, click here.

 

JBoss unveil new nonblocking web server Undertow


Chris Mayer
Two months on from a radical name change to WildFly, the application server formerly known as JBoss AS has a new “high performance” and “nonblocking” server standing alongside it.
Revealed by project lead Stuart Douglas earlier today, the web server Undertow contains HTTP upgrade support, which means Java developers can multiplex multiple protocols over the web port without losing any performance. Douglas also promised support for WebSockets, including the recent Java EE 7 JSR 356, and the ability to mix handlers in Servlet 3.1.
Douglas explained that Undertow “starts in milliseconds” either embedded in an application or standalone, and contains “a fluent builder API for building deployments,” making it ideal for unit testing.
Although this is the first beta, performance appears to have been on the agenda from the start. Third party benchmarks, conducted by Tech Empower, show that Undertow outperforms rival Java web servers, including Netty and Gemini, when testing plain text and JSON responses for Hello World (shown below).
“One of the goals of Undertow was to have great performance and I am happy to say so far we have surpassed our own expectations,” Douglas added.
Listing: Hello World example
public class HelloWorldServer {

   public static void main(final String[] args) {
       Undertow server = Undertow.builder()
               .addListener(8080, "localhost")
               .setDefaultHandler(new HttpHandler() {
                   @Override
                   public void handleRequest(final HttpServerExchange exchange) throws Exception {
                       exchange.getResponseHeaders().put(Headers.CONTENT_TYPE, "text/plain");
                       exchange.getResponseSender().send("Hello World");
                   }
               }).build();
       server.start();
   }
}
Undertow has a "composition-based architecture", meaning developers can build a web server from single-purpose handlers. The 1MB core jar keeps Undertow in line with JBoss's penchant for lightweight projects. At runtime, the team claims a simple embedded server uses less than 4MB of heap space.
Undertow is the new default web server in WildFly, and the team advise this as the best method of getting your hands on it. They’re also looking for GitHub contributors and others to join the mailing list.

SpringSource release asynchronous framework ‘Reactor’


SpringSource have released a "foundational framework" for asynchronous JVM applications called Reactor. Available on GitHub, it's designed for event- and data-driven applications that require very high throughput, and will be used in the upcoming big data project Spring XD.

SpringSource engineer Jon Brisbin, unveiling the project on the company’s blog, described Reactor as “a set of tools to not just develop but compose applications in a way that more efficiently uses system resources”.
Since the primary purpose of asynchronous frameworks is to provide high scalability and speed, it should come as no surprise that one key selling point of Reactor is its impressive I/O. Brisbin said that "on modest hardware, it's possible to process over 15,000,000 events per second with the fastest non-blocking Dispatcher". Optional use of lambdas, to be introduced in Java 8 next year, provides even higher throughput.
In addition, many asynchronous applications suffer from "callback hell" in their client-side JavaScript, wrote Brisbin. Reactor is said to be designed to significantly reduce this type of messy nested code.
However, Reactor is somewhat late to the asynchronous framework party: the JVM alone boasts the Atmosphere framework and Vert.x, the latter of which is now backed by the Eclipse Foundation. All are heavily inspired by node.js, which has won praise for its event-driven, asynchronous server-side implementation of JavaScript.
Members of SpringSource clarified in the comments that Vert.x - which sister company VMware was accused of hanging onto after its creator left - was not a competitor of Reactor, since “Reactor isn't providing a full async stack for web development”. However, it may be difficult to gain traction in an increasingly-crowded space.

Java EE 7 and JAX-RS 2.0

Java EE 7 with JAX-RS 2.0 brings several useful features, which further simplify development and lead to the creation of even more-sophisticated, but lean, Java SE/EE RESTful applications. Adam Bien introduces us to one of the key features of Java EE 7. Reprinted with permission from the Oracle Technology Network, Oracle Corporation.

Sample Code
Most Java EE 6 applications with the requirement for a remote API and free choice of technology use a more or less RESTful flavor of the JAX-RS 1.0 specification. Java EE 7 with JAX-RS 2.0 brings several useful features, which further simplify development and lead to the creation of even more-sophisticated, but lean, Java SE/EE RESTful applications.

Roast House

Roast House is a Java-friendly but simplistic JAX-RS 2.0 example, which manages and roasts some coffee beans. The roast house itself is represented as a CoffeeBeansResource. The URI "coffeebeans" uniquely identifies the CoffeeBeansResource (see Listing 1).
Listing 1
//...
import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.container.ResourceContext;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
@ApplicationScoped
@Path("coffeebeans")
public class CoffeeBeansResource {
    
    @Context
    ResourceContext rc;
    
    Map<String, Bean> bc;

    @PostConstruct
    public void init() {
        this.bc = new ConcurrentHashMap<>();
    }

    @GET
    public Collection<Bean> allBeans() {
        return bc.values();
    }

    @GET
    @Path("{id}")
    public Bean bean(@PathParam("id") String id) {
        return bc.get(id);
    }

    @POST
    public Response add(Bean bean) {
        if (bean == null) {
            return Response.status(Response.Status.BAD_REQUEST).build();
        }
        bc.put(bean.getName(), bean);
        final URI id = URI.create(bean.getName());
        return Response.created(id).build();
    }

    @DELETE
    @Path("{id}")
    public void remove(@PathParam("id") String id) {
        bc.remove(id);
    }
    
    @Path("/roaster/{id}")
    public RoasterResource roaster(){
        return this.rc.initResource(new RoasterResource());
    }
}

As in the previous JAX-RS specifications, a resource can be a @Singleton or @Stateless EJB. In addition, all root resources, providers, and Application subclasses can be deployed as managed or CDI-managed beans. Injection is also available in all extensions annotated with the @Provider annotation, which simplifies integration with existing code. JAX-RS-specific components can also be injected into sub-resources using the ResourceContext:

Listing 2

    @Context
    ResourceContext rc;

    @Path("/roaster/{id}")
    public RoasterResource roaster(){
        return this.rc.initResource(new RoasterResource());
    }

Interestingly, the javax.ws.rs.container.ResourceContext not only allows you to inject JAX-RS information into an existing instance, but also provides access to the resource classes with the ResourceContext#getResource(Class<T> resourceClass) method. Injection points of instances passed to the ResourceContext#initResource method are set with values from the current context by the JAX-RS runtime. The field String id in the RoasterResource class (shown in Listing 3) receives the value of the path parameter of the parent resource:
Listing 3
public class RoasterResource {

    @PathParam("id")
    private String id;

    @POST
    public void roast(@Suspended AsyncResponse ar, Bean bean) {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException ex) {
        }
        bean.setType(RoastType.DARK);
        bean.setName(id);
        bean.setBlend(bean.getBlend() + ": The dark side of the bean");
        Response response = Response.ok(bean).header("x-roast-id", id).build();
        ar.resume(response);
    }
}

The parameter javax.ws.rs.container.AsyncResponse is similar to the Servlet 3.0 javax.servlet.AsyncContext class and allows asynchronous request execution. In the above example, the request is suspended for the duration of the processing and the response is pushed to the client with the invocation of the AsyncResponse#resume method. The roast method is still executed synchronously, so the suspension alone does not bring any asynchronous behavior at all. However, the combination of EJB's @javax.ejb.Asynchronous annotation and the @Suspended AsyncResponse enables asynchronous execution of business logic with eventual notification of the interested client. Any JAX-RS root resource can be annotated with the @Stateless or @Singleton annotations and can, in effect, function as an EJB (see Listing 4):
Listing 4
import javax.ejb.Asynchronous;
import javax.ejb.Stateless;

@Stateless
@Path("roaster")
public class RoasterResource {

    @POST
    @Asynchronous
    public void roast(@Suspended AsyncResponse ar, Bean bean) {
    //heavy lifting
        Response response = Response.ok(bean).build();
        ar.resume(response);
    }
}

An @Asynchronous resource method with an @Suspended AsyncResponse parameter is executed in a fire-and-forget fashion. Although the request-processing thread is freed immediately, the AsyncResponse still provides a convenient handle to the client. After the completion of time-consuming work, the result can be conveniently pushed back to the client. Usually, you would like to separate JAX-RS-specific behavior from the actual business logic. All business logic could easily be extracted into a dedicated boundary EJB, but CDI eventing is even better suited to covering the fire-and-forget cases. The custom event class RoastRequest carries the payload (Bean class) as processing input and the AsyncResponse for the submission of the result (see Listing 5):
Listing 5
public class RoastRequest {

    private Bean bean;
    private AsyncResponse ar;

    public RoastRequest(Bean bean, AsyncResponse ar) {
        this.bean = bean;
        this.ar = ar;
    }

    public Bean getBean() {
        return bean;
    }

    public void sendMessage(String result) {
        Response response = Response.ok(result).build();
        ar.resume(response);
    }

    public void errorHappened(Exception ex) {
        ar.resume(ex);
    }
}

CDI events not only decouple the business logic from the JAX-RS API, but also greatly simplify the JAX-RS code (see Listing 6):
Listing 6
public class RoasterResource {

    @Inject
    Event<RoastRequest> roastListeners;

    @POST
    public void roast(@Suspended AsyncResponse ar, Bean bean) {
        roastListeners.fire(new RoastRequest(bean, ar));
    }
}

Any CDI managed bean or EJB could receive the RoastRequest in a publish-subscribe style and synchronously or asynchronously process the payload with a simple observer method: void onRoastRequest(@Observes RoastRequest request){}.
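For completeness, a minimal observer might look like the following sketch (the Roaster class is my illustration, not part of the original listings):

import javax.enterprise.event.Observes;

public class Roaster {

    public void onRoastRequest(@Observes RoastRequest request) {
        try {
            Bean bean = request.getBean();
            // ...perform the time-consuming roasting here...
            request.sendMessage("Roasted: " + bean.getName());
        } catch (Exception ex) {
            request.errorHappened(ex);
        }
    }
}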
With the AsyncResponse class, the JAX-RS specification introduces an easy way to push information to HTTP clients in real time. From the client's perspective, the asynchronous request on the server is still blocking, and therefore synchronous. From the REST design perspective, all long-running tasks should return immediately with the HTTP status code 202, along with additional information about how to get the result once processing completes.
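A minimal sketch of that 202 pattern could look like the following (the status URI, the jobId and the roaster delegate are assumptions for illustration, not part of the specification):

@POST
public Response startRoast(Bean bean) {
    String jobId = java.util.UUID.randomUUID().toString();
    // Delegate the long-running work, e.g. to an @Asynchronous EJB (assumed here)
    roaster.roastLater(jobId, bean);
    // 202 Accepted, with a Location header telling the client where to poll for the result
    return Response.accepted()
                   .location(java.net.URI.create("/roaster/status/" + jobId))
                   .build();
}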

The Return of Aspects

Popular REST APIs often require their clients to compute a fingerprint of the message and send it along with the request. On the server side, the fingerprint is computed again and compared with the attached information. If the two don't match, the message gets rejected. With the advent of JAX-RS 2.0 and the introduction of javax.ws.rs.ext.ReaderInterceptor and javax.ws.rs.ext.WriterInterceptor, the traffic can be intercepted on the server side and even on the client side. An implementation of the ReaderInterceptor interface on the server wraps MessageBodyReader#readFrom and is executed before the actual deserialization.
The PayloadVerifier fetches the signature from the header, computes the fingerprint from the stream, and eventually invokes the ReaderInterceptorContext#proceed method, which invokes the next interceptor in the chain or the MessageBodyReader instance (see Listing 7).
Listing 7
public class PayloadVerifier implements ReaderInterceptor{

    public static final String SIGNATURE_HEADER = "x-signature";

    @Override
    public Object aroundReadFrom(ReaderInterceptorContext ric) throws IOException, 
WebApplicationException {
        MultivaluedMap<String, String> headers = ric.getHeaders();
        String headerSignature = headers.getFirst(SIGNATURE_HEADER);
        InputStream inputStream = ric.getInputStream();
        byte[] content = fetchBytes(inputStream);
        String payload = computeFingerprint(content);
        if (!payload.equals(headerSignature)) {
            Response response = Response.status(Response.Status.BAD_REQUEST).header(
            SIGNATURE_HEADER, "Modified content").build();
            throw new WebApplicationException(response);
        }
        ByteArrayInputStream buffer = new ByteArrayInputStream(content);
        ric.setInputStream(buffer);
        return ric.proceed();
    }
    //...    
}

Modified content results in a different fingerprint and causes a WebApplicationException to be raised with the BAD_REQUEST (400) response code.
The computation of fingerprints for outgoing requests can be easily automated with an implementation of the WriterInterceptor. An implementation of the WriterInterceptor wraps MessageBodyWriter#writeTo and is executed before the serialization of the entity into a stream. For the fingerprint computation, the final "on-the-wire" representation of the entity is needed, so we pass a ByteArrayOutputStream as a buffer, invoke the WriterInterceptorContext#proceed() method, fetch the raw content and compute the fingerprint. See Listing 8.
Listing 8
public class PayloadVerifier implements WriterInterceptor {
    public static final String SIGNATURE_HEADER = "x-signature";

   @Override
    public void aroundWriteTo(WriterInterceptorContext wic) throws IOException, 
WebApplicationException {
        OutputStream oos = wic.getOutputStream();
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        wic.setOutputStream(baos);
        wic.proceed();
        baos.flush();
        byte[] content = baos.toByteArray();
        MultivaluedMap<String, Object> headers = wic.getHeaders();
        headers.add(SIGNATURE_HEADER, computeFingerprint(content));
        oos.write(content);

    }
    //...
}

Finally, the computed signature is added as a header, the buffer is written to the original stream, and the whole message is sent on its way. Of course, a single class can also implement both interfaces at the same time:
Listing 9
import javax.ws.rs.ext.Provider;
@Provider
public class PayloadVerifier implements ReaderInterceptor, WriterInterceptor {
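    // aroundReadFrom(...) and aroundWriteTo(...) implemented as in Listings 7 and 8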
}

As in the previous JAX-RS releases, custom extensions are automatically discovered and registered with the @Provider annotation. For the interception of MessageBodyWriter and MessageBodyReader instances, only the implementations of the ReaderInterceptor and WriterInterceptor have to be annotated with the @Provider annotation—no additional configuration or API calls are required.

Request Interception

An implementation of a ContainerRequestFilter and ContainerResponseFilter intercepts the entire request, not only the process of reading and writing entities. The functionality of both interceptors is far more useful than logging the information contained in a raw javax.servlet.http.HttpServletRequest instance. The class TrafficLogger is not only able to log the information contained in the HttpServletRequest, but also to trace information about the resources matching a particular request, as shown in Listing 10.
Listing 10
@Provider
public class TrafficLogger implements ContainerRequestFilter, ContainerResponseFilter {

    //ContainerRequestFilter
    public void filter(ContainerRequestContext requestContext) throws IOException {
        log(requestContext);
    }
    //ContainerResponseFilter
    public void filter(ContainerRequestContext requestContext, ContainerResponseContext 
                                                 responseContext) throws IOException {
        log(responseContext);
    }

    void log(ContainerRequestContext requestContext) {
        SecurityContext securityContext = requestContext.getSecurityContext();
        String authentication = securityContext.getAuthenticationScheme();
        Principal userPrincipal = securityContext.getUserPrincipal();
        UriInfo uriInfo = requestContext.getUriInfo();
        String method = requestContext.getMethod();
        List<Object> matchedResources = uriInfo.getMatchedResources();
        //...
    }

    void log(ContainerResponseContext responseContext) {
        MultivaluedMap<String, String> stringHeaders = responseContext.getStringHeaders();
        Object entity = responseContext.getEntity();
    //...
    }
}

Accordingly, a registered implementation of the ContainerResponseFilter gets an instance of the ContainerResponseContext and is able to access the data generated by the server. Status codes and header contents, for example the Location header, are easily accessible. ContainerRequestContext as well as ContainerResponseContext are mutable classes, so filters can modify the request or response on the fly.
Without any additional configuration, a ContainerRequestFilter is executed after the HTTP resource-matching phase. At that point it is no longer possible to modify the incoming request in order to customize the resource binding. If you would like to influence the binding between the request and a resource, a ContainerRequestFilter can be configured to run before the resource-binding phase: any ContainerRequestFilter annotated with the javax.ws.rs.container.PreMatching annotation is executed before resource binding, so the HTTP request contents can be tweaked to achieve the desired mapping. A common use case for @PreMatching filters is adjusting HTTP verbs to overcome limits in the networking infrastructure. More "esoteric" methods like PUT, OPTIONS, HEAD, or DELETE may be filtered out by firewalls or not supported by some HTTP clients. A @PreMatching ContainerRequestFilter implementation could fetch a header (for example, "X-HTTP-Method-Override") indicating the desired HTTP verb and change a POST request into a PUT on the fly (see Listing 11).
Listing 11
@Provider
@PreMatching
public class HttpMethodOverrideEnabler implements ContainerRequestFilter {

    public void filter(ContainerRequestContext requestContext) throws IOException {
        String override = requestContext.getHeaders()
                .getFirst("X-HTTP-Method-Override");
        if (override != null) {
            requestContext.setMethod(override);
        }
    }
}

Configuration

All interceptors and filters registered with the @Provider annotation are globally enabled for all resources. At deployment time, the server scans the deployment units for @Provider annotations and automatically registers all extensions before the application is activated. Extensions can be packaged into dedicated JARs and deployed on demand with the WAR (in the WEB-INF/lib folder); the JAX-RS runtime scans these JARs and registers the extensions automatically. Drop-in deployment of self-contained JARs is nice, but requires fine-grained extension packaging: all extensions contained within a JAR are activated at once.
JAX-RS 2.0 introduces name-binding annotations for the selective decoration of resources. The mechanics are similar to CDI qualifiers: any custom annotation denoted with the meta-annotation javax.ws.rs.NameBinding can be used for the declaration of interception points:
Listing 12
@NameBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface Tracked {
}

All interceptors or filters denoted with the @Tracked annotation can be selectively activated by applying the same @Tracked annotation to classes, methods, or even subclasses of Application:
Listing 13
@Tracked
@Provider
public class TrafficLogger implements ContainerRequestFilter, ContainerResponseFilter {
}
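
On the resource side, the same annotation marks where the bound filter applies. A minimal sketch (the resource class and method here are illustrative, not taken from the article's sources):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("coffeebeans")
public class CoffeeBeansResource {

    @GET
    @Tracked // activates the TrafficLogger filter for this method only
    public Response all() {
        //...
        return Response.ok().build();
    }
}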

Custom NameBinding annotations can be packaged together with the corresponding filter or interceptor and selectively applied to resources by the application developer. Although the annotation-driven approach significantly increases flexibility and allows coarser plug-in packages, the binding is still static: the application needs to be recompiled and redeployed to change the interceptor or filter chain.
In addition to the global and annotation-driven configuration of cross-cutting functionality, JAX-RS 2.0 also introduces a new API for dynamic extension registration. An implementation of the javax.ws.rs.container.DynamicFeature interface annotated with the @Provider annotation is used by the container as a hook for registering interceptors and filters dynamically, without the need for recompilation. The LoggerRegistration extension conditionally registers the PayloadVerifier interceptor and the TrafficLogger filter by querying predefined system properties, as shown in Listing 14:
Listing 14
@Provider
public class LoggerRegistration implements DynamicFeature {

    @Override
    public void configure(ResourceInfo resourceInfo, FeatureContext context) {
        String debug = System.getProperty("jax-rs.traffic");
        if (debug != null) {
            context.register(new TrafficLogger());
        }
        String verification = System.getProperty("jax-rs.verification");
        if (verification != null) {
            context.register(new PayloadVerifier());
        }
    }
}  

The Client Side

The JAX-RS 1.1 specification did not cover the client side. Although proprietary implementations of a client REST API, such as RESTEasy or Jersey, could communicate with any HTTP resource (not necessarily implemented with Java EE), the client code was directly dependent on the particular implementation. JAX-RS 2.0 introduces a new, standardized Client API. Because the bootstrapping is standardized, the underlying Service Provider Interface (SPI) implementation is replaceable. The API is fluent and similar to the majority of the proprietary REST client implementations (see Listing 15).
Listing 15
import static org.hamcrest.CoreMatchers.hasItem;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import java.util.Collection;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.GenericType;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import org.junit.Before;
import org.junit.Test;

public class CoffeeBeansResourceTest {

    Client client;
    WebTarget root;

    @Before
    public void initClient() {
        this.client = ClientBuilder.newClient().register(PayloadVerifier.class);
        this.root = this.client.target("http://localhost:8080/roast-house/api/coffeebeans");
    }

    @Test
    public void crud() {
        Bean origin = new Bean("arabica", RoastType.DARK, "mexico");
        final String mediaType = MediaType.APPLICATION_XML;
        final Entity<Bean> entity = Entity.entity(origin, mediaType);
        Response response = this.root.request().post(entity, Response.class);
        assertThat(response.getStatus(), is(201));

        Bean result = this.root.path(origin.getName()).request(mediaType).get(Bean.class);
        assertThat(result, is(origin));
        Collection<Bean> allBeans = this.root.request()
                .get(new GenericType<Collection<Bean>>() {});
        assertThat(allBeans.size(), is(1));
        assertThat(allBeans, hasItem(origin));

        response = this.root.path(origin.getName()).request(mediaType).delete(Response.class);
        assertThat(response.getStatus(), is(204));

        response = this.root.path(origin.getName()).request(mediaType).get(Response.class);
        assertThat(response.getStatus(), is(204));
    }
//..
}

In the integration test above, the default Client instance is obtained using the parameterless ClientBuilder.newClient() method. The bootstrapping process itself is standardized with the internal javax.ws.rs.ext.RuntimeDelegate abstract factory. Either an existing instance of RuntimeDelegate is injected into the ClientBuilder (by, for example, a Dependency Injection framework), or it is obtained by looking for a hint in the file META-INF/services/javax.ws.rs.ext.RuntimeDelegate, then in ${java.home}/lib/jaxrs.properties, and finally by searching for the javax.ws.rs.ext.RuntimeDelegate system property. If the discovery fails, an attempt is made to initialize a default (Jersey) implementation.
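As a sketch of the last step in that lookup chain (the Jersey class name below is an assumption; consult the documentation of the implementation you actually ship):

// Hypothetical: select a RuntimeDelegate explicitly via the system property,
// which is consulted when the service file and jaxrs.properties yield nothing.
System.setProperty("javax.ws.rs.ext.RuntimeDelegate",
        "org.glassfish.jersey.internal.RuntimeDelegateImpl"); // assumed class name
Client client = ClientBuilder.newClient();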
The main purpose of a javax.ws.rs.client.Client is to enable fluent access to javax.ws.rs.client.WebTarget or javax.ws.rs.client.Invocation instances. A WebTarget represents a JAX-RS resource and an Invocation is a ready-to-use request waiting for submission. WebTarget is also an Invocation factory.
In the method CoffeeBeansResourceTest#crud(), the Bean object is passed back and forth between client and server. With the choice of MediaType.APPLICATION_XML, only a few JAXB annotations are needed to send and receive a DTO serialized as an XML document:
Listing 16
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Bean {

    private String name;
    private RoastType type;
    private String blend;

}

The names of the class and attributes have to match the server's representation for successful marshaling, but the DTO does not have to be binary compatible. In the above example, both Bean classes are located in different packages and even implement different methods. A desired MediaType is passed to the WebTarget#request() method, which returns an instance of a synchronous Invocation.Builder. The final invocation of a method named after the HTTP verbs (GET, POST, PUT, DELETE, HEAD, OPTIONS, or TRACE) initiates a synchronous request.
The new client API also supports asynchronous resource invocation. As mentioned earlier, an Invocation instance decouples the request from its submission. An asynchronous request can be initiated with the chained async() method invocation, which returns an AsyncInvoker instance. See Listing 17.
Listing 17
    @Test
    public void roasterFuture() throws Exception {
    //...
        Future<Response> future = this.root.path("roaster").path("roast-id")
                .request().async().post(entity);
        Response response = future.get(5, TimeUnit.SECONDS);
        Bean roasted = response.readEntity(Bean.class);
        assertNotNull(roasted);
        assertThat(roasted.getBlend(), containsString("The dark side of the bean"));
    }

There is not a lot of benefit in the "quasi-asynchronous" communication style in the above example—the client still has to block and wait until the response arrives. However, the Future-based invocation is very useful for batch-processing: the client can issue several requests at once, gather the Future instances, and process them later.
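A minimal sketch of that batch style, reusing the root WebTarget from Listing 15 (the bean names and this helper test are made up for illustration; java.util and java.util.concurrent imports are omitted, as in Listings 17 and 18):

    @Test
    public void batchRequests() throws Exception {
        List<Future<Response>> pending = new ArrayList<>();
        for (String name : Arrays.asList("arabica", "robusta", "liberica")) {
            // issue all requests up front without blocking
            pending.add(this.root.path(name).request().async().get(Response.class));
        }
        for (Future<Response> future : pending) {
            // collect the results afterwards; each get() waits at most five seconds
            Response response = future.get(5, TimeUnit.SECONDS);
            assertNotNull(response);
        }
    }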
A truly asynchronous implementation can be achieved with a callback registration, as shown in Listing 18:
Listing 18
    @Test
    public void roasterAsync() throws InterruptedException {
    //...
        final Entity<Bean> entity = Entity.entity(origin, mediaType);
        this.root.path("roaster").path("roast-id").request().async().post(
entity, new InvocationCallback<Bean>() {
            public void completed(Bean rspns) {
            }

            public void failed(Throwable thrwbl) {
            }
        });
    }

For each method returning a Future, there is also a corresponding callback method available. An implementation of the InvocationCallback interface is accepted as the last parameter of the method (post(), in the example above) and is asynchronously notified upon successful invocation with the payload or—in a failure case—with an exception.
The construction of URIs can be streamlined with the built-in templating mechanism. Predefined placeholders are resolved shortly before the request execution, which saves the repetitive creation of WebTarget instances:
Listing 19
    @Test
    public void templating() throws Exception {
        String rootPath = this.root.getUri().getPath();
        URI uri = this.root.path("{0}/{last}").
                resolveTemplate("0", "hello").
                resolveTemplate("last", "REST").
                getUri();
        assertThat(uri.getPath(), is(rootPath + "/hello/REST"));
    }

A small but important detail: on the client side, extensions are not discovered at initialization time; rather, they have to be explicitly registered with the Client instance: ClientBuilder.newClient().register(PayloadVerifier.class). However, the same entity interceptor implementations can be shared between client and server, which simplifies testing, reduces potential bugs, and increases productivity. The PayloadVerifier interceptor introduced earlier can be reused on the client side without any change.

Conclusion: Java EE or Not?

Interestingly, JAX-RS 2.0 does not even require a full-fledged application server: any environment that provides the specified context types can host a JAX-RS 2.0-compliant runtime. However, the combination with EJB 3.2 brings asynchronous processing, pooling (and so throttling), and monitoring. Tight integration with Servlet 3+ provides efficient asynchronous processing of @Suspended responses through AsyncContext support, and the CDI runtime brings eventing. Bean Validation is also well integrated and can be used for the validation of resource parameters. Using JAX-RS 2.0 together with the other Java EE 7 APIs is the most convenient (no configuration) and most productive (no re-invention) way of exposing objects to remote systems.

Author Bio: Consultant and author Adam Bien is an Expert Group member for the Java EE 6/7, EJB 3.X, JAX-RS, and JPA 2.X JSRs. He has worked with Java technology since JDK 1.0 and with servlets/EJB 1.0 and is now an architect and developer for Java SE and Java EE projects. He has edited several books about JavaFX, J2EE, and Java EE, and he is the author of Real World Java EE Patterns—Rethinking Best Practices and Real World Java EE Night Hacks. Adam is also a Java Champion, Top Java Ambassador 2012, and JavaOne 2009, 2011, and 2012 Rock Star. Adam organizes occasional Java (EE) workshops at Munich's Airport.
Reprinted with permission from the Oracle Technology Network, Oracle Corporation

Arun Gupta on Higher Productivity from Embracing HTML5 with Java EE 7

Oracle Java EE Expert Arun Gupta provides glimpses into Java EE 7. Reprinted with permission from the Oracle Technology Network, Oracle Corporation.

At the annual IOUC (International Oracle User Community) Summit, held January 14–16, 2013, at Oracle headquarters in Redwood Shores, California, more than 100 top user group leaders from around the world gathered to share best practices, provide feedback, and receive updates from leading Oracle developers.
Of particular note was a session by Oracle Java evangelist and Java EE expert Arun Gupta entitled "The Java EE 7 Platform: Higher Productivity and Embracing HTML5," which offered a window into the rich possibilities that will be available in Java EE 7 upon its release in the spring of 2013. Gupta took the attendees through some major developments in Java EE 7:
  • Java API for RESTful Web Services 2.0
  • Java Message Service 2.0
  • Java API for JSON Processing 1.0
  • Java API for WebSocket 1.0
  • Bean Validation 1.1
  • Batch Applications for the Java Platform 1.0
  • Java Persistence API 2.1
  • Servlet 3.1
  • Concurrency Utilities for Java EE 1.0
  • JavaServer Faces 2.2
In addition, he discussed the implementation status of Java EE 7, looked ahead to Java EE 8, and addressed the challenges and potential pitfalls involved in establishing a standards-based cloud programming model.
Gupta began the session with a brief discussion of Java EE 6, released on December 10, 2009. The intervening years have seen more than 50 million Java EE 6 component downloads, establishing it as the world's premier application development platform and marking the fastest adoption of a Java EE release thus far.
"In addition to Oracle, companies such as IBM, SAP, Hitachi, Red Hat, Fujitsu, and Caucho have adopted it," said Gupta. "There are 18 implementations of compliant Java EE 6 application servers today. Every month or so, we see a new vendor coming in."
He pointed out that Java EE 7 offers higher productivity; less boilerplate; richer functionality; more default options; and HTML5 support in the form of WebSocket and JSON. Although Java EE 7 offers a standards-based platform with a lot of innovation, Gupta observed, "There are not enough standards in the cloud with W3C and other standards bodies. More standards are needed so that we can define a Java API for the cloud. Premature standardization can also be a problem if not enough innovation has taken place. So what is the right thing for the platform? We have reached out to the community, the core group members, and the executive committee of the Java Community Process and have focused on providing higher productivity and on embracing the HTML5 platform more closely. We are going to use dependency injection a lot more, which will give developers the ability to write less boilerplate code and offer richer functionality such as batch applications and caching. Similarly, for HTML5, we are embracing WebSocket functionality and the ability to parse and generate a JSON structure. We are providing support for HTML5-friendly markup as part of JSF."
He then discussed the most important features of Java EE 7.

JSR 339: Java API for RESTful Web Services 2.0

"We are building several new features in JAX-RS 1.0," explained Gupta. "We are introducing a new client API in 2.0 that will enable you to invoke a REST endpoint in a standard way. We are providing extension points, methods filters, and entity interceptors that improve how to do request and response and how to do pre- and postprocessing very easily, which will be useful in addressing cross-cutting concerns such as logging or security, which you can easily do as part of your REST endpoint."
Other new developments include asynchronous processing for both the server and the client, enabling more-scalable applications, hypermedia support, common configuration to simplify the REST endpoint, and more.
For more on JAX-RS 2.0, check out Gupta's blog entry on this topic.

JSR 343: Java Message Service 2.0

Gupta pointed out that the last version of Java Message Service was released in December 2003—before JDK 1.4. Although JMS is stable and extensively used, he explained that it needed to catch up with subsequent changes in the Java platform such as generics, injection, and annotation. Java EE 7 leverages the new functionality to improve the way developers write JMS code. With JMS 2.0, developers will use less boilerplate code and will be able to take advantage of resource injection, meaning greater functionality and more-effective, simpler code.
Gupta demonstrated how it took 20 lines to send a simple message with JMS 1.1 whereas with JMS 2.0, it could be done in 6 lines with cleaner and more semantically readable code. "This is a huge improvement," he remarked.
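As a hedged sketch of that six-line style (the session bean and the queue name below are hypothetical):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

@Stateless
public class StatementNotifier {

    @Inject
    JMSContext context; // container-managed connection and session

    @Resource(lookup = "java:global/jms/notifications") // hypothetical queue
    Queue queue;

    public void send(String message) {
        // one line replaces the Connection/Session/MessageProducer ceremony of JMS 1.1
        context.createProducer().send(queue, message);
    }
}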

JSR 353: Java API for JSON Processing 1.0

Gupta turned to the Java API for JSON Processing 1.0, whose original purpose was to parse and generate JSON. He remarked, "If you are familiar with JAXP, you know that it has an event-driven StAX streaming API and a DOM-based API. The same analogy applies to Java API for JSON Processing: it has a streaming API and an object API."
Streaming API
  • Provides a low-level, efficient way to parse/generate JSON
  • Provides pluggability for parsers/generators
Object model
  • Simple, easy-to-use, high-level API
  • Implemented on top of streaming API
"What has been missing," said Gupta, "is the ability to take a POJO and place an annotation so that it will automatically do the binding for you as in JAXB. This improvement is forthcoming."

JSR 356: Java API for WebSocket 1.0

JSR 356 defines how to write a standards-based WebSocket application. It defines not just the service endpoint but also ways it can be written with either an annotation-driven or an interface-driven model. A separate client-side API will be introduced as well.
"The WebSocket protocol works by having everything on the wire functioning as a data or control frame," explained Gupta. "So an API or an SPI for data frames can be defined, enabling developers to manipulate the data frames. For example, you can use the SPIs to perform some WebSocket extensions for things such as compression or multiplexing. We are often asked how this can work in a Java SE environment. Whether it gets defined and approved in JSR 356 is to be determined. But Tyrus, the reference implementation for JSR 356, will provide a client profile."

JSR 349: Bean Validation 1.1

Gupta explained that Bean Validation was a new specification introduced in Java EE 6. Instead of defining validation constraints—such as the number of characters allowed in a name or the number of elements in a list—as part of the application logic, they could be more easily defined with annotation. "An annotation," said Gupta, "is put as a constraint on a method, and the Bean Validator, which is part of Java EE, automatically kicks in and honors those constraints."
He explained that constraints can be defined at one point and are automatically honored by the Bean Validator; they can be enabled, disabled, configured, or defined as custom constraints. "In Java EE 6, this was used primarily by Java Persistence API and JavaServer Faces," Gupta remarked. "In Java EE 7, they can be put on EJBs or simple POJOs."
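A minimal sketch of such a constrained method (the class name and constraint values are hypothetical):

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class RoastHouse {

    // Bean Validation 1.1 method validation: in a managed (CDI/EJB) environment
    // the container checks the argument and the return value automatically.
    public @NotNull String findBlend(@NotNull @Size(min = 1, max = 40) String name) {
        //...
        return "dark roast";
    }
}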

JSR 352: Writing Simple Batch Applications in Java EE 7

JSR 352 offers new functionality in Java EE 7 for non-interactive, bulk-oriented, long-running tasks, such as generating monthly bank statements, which involves processing thousands of accounts within the short processing window available on the bank's network. "You just define a process, and it runs by itself on schedule," said Gupta. "Most of the concepts of JSR 352 are borrowed from Spring. Tasks can be run in parallel or sequentially. There is a full Java XML that will define what your task will look like, with defined steps and a complete task flow."
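As a hedged sketch, a task-oriented step of such a job might look like this in the Java API (the job name "statements" and the work itself are hypothetical placeholders):

import java.util.Properties;
import javax.batch.api.AbstractBatchlet;
import javax.batch.runtime.BatchRuntime;

public class StatementBatchlet extends AbstractBatchlet {

    @Override
    public String process() {
        //... generate the monthly statements
        return "COMPLETED"; // exit status reported to the job repository
    }

    public static long startJob() {
        // refers to the job defined in META-INF/batch-jobs/statements.xml
        return BatchRuntime.getJobOperator().start("statements", new Properties());
    }
}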

JSR 236: Concurrency Utilities for Java EE 1.0

Gupta commented that JSR 236 was filed a long time ago and has recently regained momentum. It defines how to create, for example, new managed threads. Java EE itself restricts the creation of new threads: Java EE is a managed environment, so if the user creates threads that the container and the runtime know nothing about, how can it manage them? JSR 236 enables developers to create managed threads that are known to, and managed by, the container.
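A hedged sketch of the resulting programming model (the session bean is hypothetical; the container supplies the default executor):

import java.util.concurrent.Future;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;

@Stateless
public class ReportTrigger {

    // the container provides the default executor, so submitted tasks
    // run on threads the runtime knows about and can manage
    @Resource
    ManagedExecutorService executor;

    public Future<?> triggerReport() {
        return executor.submit(new Runnable() {
            @Override
            public void run() {
                //... long-running work on a managed thread
            }
        });
    }
}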

JSR 344: JavaServer Faces 2.2

Reusable flows, HTML5-friendly markup, and resource library contracts are the big-ticket items for JSF 2.2. Faces Flow borrows core concepts from Oracle ADF Task Flow, Spring Web Flow, and Apache MyFaces CODI. It introduces the @FlowScoped CDI annotation for flow-local storage, and @FlowDefinition for defining the flow with CDI producer methods. There are clearly defined entry and exit points with well-defined parameters, so flows can be reused.
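A minimal sketch of a flow-scoped bean (the flow name "checkout" and the field are illustrative):

import java.io.Serializable;
import javax.faces.flow.FlowScoped;
import javax.inject.Named;

@Named
@FlowScoped("checkout") // lives exactly as long as the "checkout" flow
public class CheckoutFlow implements Serializable {

    private String shippingAddress;
    //... getters and setters used by the pages of the flow
}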

Minor Changes in Servlets and Java Persistence API 2.1

Gupta closed the session with a brief discussion of the minor changes offered by Servlet 3.1 and Java Persistence API 2.1. He pointed out that although the changes in Java EE 7 for these are minor, they each received major overhauls in Java EE 6.

Adopt-a-JSR for Java EE 7

Gupta encouraged developers to participate in Adopt-a-JSR, where they can pursue their interest in particular Java EE 7 JSRs and download code, play with it, report bugs, and offer feedback to Java EE 7 specification leads.

Author Bio: Janice J. Heiss is the Java acquisitions editor at Oracle and a technology editor at Java Magazine.