Showing posts with label JavaOne 2010. Show all posts

Saturday, May 28, 2011

JavaFX 2 Beta: Time to Reevaluate JavaFX?

JavaFX 2 beta was released earlier this week. The main JavaFX download page states the following regarding this release:
JavaFX 2.0 Beta is the latest major update release for JavaFX. Many of the new features introduced in JavaFX 2.0 Beta are incompatible with JavaFX 1.3. If you are developing a new application in JavaFX, it is recommended that you start with JavaFX 2.0 Beta.

I posted O JavaFX, What Art Thou? a year ago. Much has changed in JavaFX since then. Most notable of these changes were the JavaOne 2010 announcements related to JavaFX's direction. As I blogged about at JavaOne 2010, I thought Oracle's plan to replace the separate, F3-based JavaFX Script language with a standard Java API for JavaFX was the correct decision for any possibility of long-term and wide-spread adoption of the technology. Although a small minority of developers really like JavaFX Script, that direction did not seem to appeal to the vast majority. The good news for those who want to continue using JavaFX Script is the spawning of Visage. I was excited about Oracle's newly announced direction for JavaFX, but we all knew that we needed to wait to see if Oracle would deliver.

With the announcement regarding JavaFX 2 beta, a natural question is, "Is it time to reconsider JavaFX?" My previously mentioned post O JavaFX, What Art Thou? brought up several of the issues that concerned developers considering adoption of JavaFX. Most of this post asks those questions again in light of JavaFX 2 beta.


Is JavaFX Java?

One of the questions in that post, "Is JavaFX Java?" seems to be answered somewhat (and more than before) in the affirmative. Because JavaFX 2 will support standard Java APIs, it will at least be "Java" in the sense that any third-party library like Hibernate or the Spring Framework is "Java." It still may not be Java in the sense that it's part of neither the Java SE specification nor the Java EE specification.


Is JavaFX Standard?

I've seen nothing to indicate that JavaFX 2 will be any more "standard" than previous versions. As far as I can tell, JavaFX remains a product with no standard specification and only a single implementation (Oracle's). There are no specifications that others might implement.


Is JavaFX Open Source?

This is another question that probably won't be fully answered until JavaFX 2 is formally released in production version. My best guess is that it will consist of a similar mixture of licenses (some open source) as earlier versions did.


What is JavaFX's license?

This is one more question to add to the list of questions to ask again with the formal, non-beta release of JavaFX 2. My best guess, similar to my guess regarding its open source status, is that JavaFX's licenses will remain somewhat similar to those for previous versions of JavaFX. I also expect JavaFX licensing to follow the Flex/Flash licensing model, with the compiler and language tools tending to be open source and the runtime tending to be proprietary.

The Oracle Early Technology Adopter License Terms seem to apply for the beta release. The Charles Humble interview of Richard Bair in JavaFX 2.0 Will Bring Hardware Accelerated Graphics and Better Licensing Terms to the Platform includes brief mention of licensing plans. Regarding licensing of the JavaFX runtime, Bair states, "The JavaFX license is expected to be consistent with the JRE license, which allows such distribution under specific conditions."


How is JavaFX's Deployment?

Although I believe that one of the largest drawbacks of JavaFX adoption in the past was the need to learn another non-Java language in JavaFX Script, there is no question that issues with the deployment model competed for most important disadvantage of JavaFX. Max Katz has stated, "I think JavaFX failed to gain any significant momentum mainly because of deployment problems." He discussed these deployment problems in a separate post.

The crux of the problem with JavaFX deployment seemed to revolve around its applet foundation. In the previously mentioned Max Katz post, he stated:
As the mantra in real estate is: location, location, location. The mantra in JavaFX is: deployment, deployment, deployment. Unfortunately, this is where JavaFX has failed miserably. The original Java applets failed miserably in deployment and JavaFX (which was supposed to be the next applet technology or applets 2.0) has inherited the failed deployment with it. In other words, nothing has really changed since 1995.

Although the "Next Generation in Applet Java Plug-in Technology" did bring improvements to the applet deployment environment, it simply wasn't enough. Developers were widely unhappy about its performance and usability when compared to environments such as Flash, HTML/JavaScript, and Silverlight. Unfortunately, it may be too late to salvage the applet at this point.

Oracle appears to be addressing the deployment issue in JavaFX 2. In Deploying JavaFX Applications, Nancy Hildebrandt writes about "three basic types of [JavaFX] application deployment": the maligned applet, Web Start, and standalone desktop. I'm particularly excited about the non-browser deployment environments.

In her article, Hildebrandt talks about JavaFX 2 Beta Deployment Features that are specific to each deployment environment (improvements for applet/browser and Web Start environments) as well as general deployment features. I particularly like two of these general deployment features that are highly related:
  • "The same JavaFX source code works for applets, Web Start applications, and standalone desktop applications."
  • "The same JavaFX JAR file can be deployed as an applet, a Web Start application, or a standalone application."
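The second bullet is worth making concrete. One JAR can be launched on the desktop with java -jar, referenced from a Web Start descriptor, or embedded in a page as an applet. As a hedged sketch (this is a generic JNLP descriptor rather than the JavaFX-specific one the beta packaging tools generate, and the codebase, file, and class names are invented), a Web Start descriptor might reference that single JAR like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical Web Start descriptor: the same MyApp.jar it points to
     could also be run standalone or deployed as an applet. -->
<jnlp spec="1.0+" codebase="http://example.com/myapp" href="MyApp.jnlp">
  <information>
    <title>MyApp</title>
    <vendor>Example Vendor</vendor>
  </information>
  <resources>
    <j2se version="1.6+"/>
    <jar href="MyApp.jar" main="true"/>
  </resources>
  <application-desc main-class="com.example.MyApp"/>
</jnlp>
```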


Is It Time to Revisit JavaFX?

Ever since my initial significant disappointment with JavaFX following the now infamous 2007 JavaOne announcement, I have been a skeptic of JavaFX's future. Beginning with the JavaOne 2010 opening keynote announcement regarding Oracle's change of direction for JavaFX, I have begun to think that JavaFX has an outside chance at a real future in application development. Oracle does seem to be delivering on their announced plans. If they continue to do so, JavaFX is likely to be a more compelling choice than it's ever been.

My previous concerns regarding JavaFX all involved a common theme: it had nothing to really distinguish itself from a host of more mature products with wider communities and support. Because JavaFX had its own language in JavaFX Script, it really was no "easier" for a Java developer to learn than Flex/MXML/ActionScript. Flash and Silverlight are proprietary, but so is the JavaFX runtime. JavaFX was (and is) no more standards-based than any of the competitors. Flash and Silverlight have also boasted better runtime experiences than JavaFX. HTML5 is a recent entry to provide formidable competition for JavaFX.

Oracle appears to be changing JavaFX to distinguish itself better from other technologies. By allowing for standard Java APIs and for non-browser JavaFX applications, JavaFX becomes more attractive to the massive Java developer base. JavaFX will have the most difficulty competing in the browser space with Flex/Flash, Silverlight, and HTML5 already entrenching themselves there. However, I think JavaFX can find some success in the Web Start and standalone desktop environments against tools like Adobe AIR. AIR has been aimed at Flex developers who wish to develop desktop applications. JavaFX may just reintroduce the Java advantage of the same code/same JAR being able to run in the browser or on the desktop.

Because I have a lot on my plate already (starting to really use PHP for instance), I'll probably monitor others' reports of their use of JavaFX 2 beta in blog posts and other online forums. Assuming more good reports than bad, I hope to start trying JavaFX out for myself between now and JavaOne 2011 (where I expect to see heavy emphasis on JavaFX coupled with releases and other news).

Although lack of time is my biggest reason for not starting to use JavaFX 2 beta today, there are other reasons that may keep some from starting to use JavaFX 2 beta immediately. First, it is not available for all of the major operating system platforms. The JavaFX 2.0 Beta System Requirements page states which operating systems and web browsers are currently supported. In general, the supported operating systems are 32-bit and 64-bit versions of Windows (Windows XP, Windows Vista, and Windows 7). Although Safari and Opera are not explicitly listed as supported, recent versions of the three most popular web browsers are explicitly supported: Chrome, Firefox 3.6 and Firefox 4, and Internet Explorer 8. Many of us hope, of course, that JavaFX will ultimately be available not only for other desktop machines (Linux or Mac), but will be available for mobile devices. This can be seen in the comments on Richard Bair's post Is JavaFX 2.0 Cross Platform?

The JavaFX 2.0 Beta System Requirements page also states that JDK 6 Update 24 (perhaps better known for patching a significant security hole) or later is required. In addition, it points out that 32-bit JavaFX runtime and 32-bit JDK are supported on the Windows environments, even for 64-bit Windows versions.

The JavaOne 2010 opening keynote focused largely on continued inclusion of Prism in JavaFX 2 and other desktop Java technologies. The JavaFX 2.0 Beta System Requirements page states which graphics cards are known to support Prism. JavaFX will work without these graphics cards, but the Java 2D pipeline is used instead of Prism in such cases.


Conclusion

Oracle's JavaOne 2010 plans for JavaFX and their recent release of JavaFX 2.0 beta have piqued my curiosity. Once I have a little more time and once a few of the wrinkles commonly associated with beta software have been ironed out, I do plan to reevaluate JavaFX. This could change based on others' documented experiences, but my current plan is to start using JavaFX in the near future and to definitely be somewhat more familiar with it in time for JavaOne 2011. I also expect there to be significant news and information on JavaFX 2 at JavaOne 2011.

Monday, September 27, 2010

JavaOne 2010: General Observations and Overall Impressions

JavaOne 2010 is over. Some questions have been answered, many in the way that I hoped they would be. Overall, it was a great conference. Here are some of my thoughts regarding JavaOne 2010.

JavaOne's Relationship to Oracle OpenWorld
Although I don't know how the 41,000 attendees break down (especially because some attendees attend portions of both conferences), my guess is that attendees who would say they are primarily attending Oracle OpenWorld probably outnumber attendees who would say they are primarily attending JavaOne by 3 to 1. I believe that Oracle OpenWorld has been larger than JavaOne just about every year in which they've both existed and I believe the difference has been even more significant in recent years.

Some have complained about JavaOne not being treated as significantly as Oracle OpenWorld.  Evidence they can cite includes Oracle OpenWorld getting the Moscone Center and an Oracle OpenWorld keynote kicking off the week (other than MySQL Sunday and a few other activities earlier on Sunday). As far as the Moscone Center goes, logic dictates that the conference with larger number of attendees should be held there. I generally enjoyed the opportunity to walk outside as I went back and forth between the Hilton and Parc 55. I do agree that it required more time than I would have liked to get down to the Moscone Center, but I only needed to do that two or three times.

I thought Oracle did a nice job of giving JavaOne its own conference feel in the Union Square area. The Mason Street Tent had great proximity to the Hilton and Parc 55. The Hilton Grand Ballroom was spacious enough to hold the majority of the JavaOne attendees (with overflow generally being in the Yosemite conference rooms).

There's no question that Oracle OpenWorld is the "big brother" compared to the smaller JavaOne. However, I personally felt that JavaOne Opening Keynote was more interesting than the Oracle OpenWorld keynote. Instead of focusing on the relative aspects of Oracle OpenWorld versus JavaOne, however, I prefer to focus solely on my experience with JavaOne 2010 regardless of how Oracle OpenWorld plays into it.


There are likely significant financial and logistical advantages for Oracle in holding the conferences all at once. If there are, I can live with JavaOne as it was held. However, if Oracle is only holding them together in an attempt to allow attendees to attend all the conferences, I think that's an advantage that only a small minority of the attendees take advantage of. My guess is that most JavaOne attendees would give up the opportunity to attend Oracle OpenWorld if they could have JavaOne in the Moscone Center.


Common Themes
One of the things I look for in a conference is common themes that pervade the conference. These generally give me an idea of which products, libraries, frameworks, and technologies are most worth further investigation. Some of the things that stood out for their commonality in this edition of JavaOne 2010 were the utility of Groovy (one I had already bought into), enthusiasm for Scala, the wealth of available unit testing products (Hamcrest), the future of Java, and, of course, the future of JavaFX.

Groovy is In

It seems to me that Groovy has either reached or is very close to reaching that point where it is no longer new or unusual to the majority of conference attendees. Groovy, it seems, is poised to soon join the list of Java-related tools that are "taken for granted." This is a good thing because it indicates wide and general acceptance, and it means fewer introductory sessions on the subject, more advanced sessions, and more sessions that simply use Groovy as a matter of course in presenting something else. It reminds me that we used to have introductory XML presentations, introductory Swing presentations, and so forth. We rarely see those today because those are assumed technologies, and I believe Groovy is taking its first steps in that direction. It was cited heavily in presentations on unit testing, for example. My belief is that Groovy is becoming more like Ant in terms of familiarity: whether Java developers like or don't like Groovy, most will likely have at least passing familiarity with it in the future because of its prevalence in tools, frameworks, and presentations.

Scala May be the Next Big Thing

Scala seems to be taking Groovy's place in the list of JVM languages that may be the "next big thing" (Groovy, as I stated in the previous paragraph, seems to have arrived at "the latest big thing"). Scala has some zealous evangelists with some features to back up that enthusiasm.

Polyglot Programming

As my discussion of themes related to Groovy and Scala implies, polyglot programming was a major theme at this conference.  However, it went beyond programming. There was discussion of polyglot persistence and other areas of software development where the developer benefits from knowing and using multiple alternatives at the same time.

JavaFX Has a Future

As it has for every edition of JavaOne since JavaOne 2007, JavaFX again was the overall dominant theme of this conference. I felt that this year was different, however, in that this year's marketing for JavaFX actually seems to be pointed in a positive direction that bodes well for a long-term future for the technology if the bold plans are realized. The plan to scrap JavaFX Script and make Java APIs available for accessing JavaFX seems obvious but bold at the same time. My only surprise is that this decision (to make JavaFX available via normal Java APIs) was not made and announced sooner.

Several Java developers told me they could not use Flex because it's not Java. I always asked, "Is JavaFX?" It was difficult to find any measure by which JavaFX has been Java other than the first four letters in its name and the ability to run on the JVM (which a host of other languages can do as well). Although I think the latest news on JavaFX is very positive, it still needs to be implemented. Also, there are still many questions. Will JavaFX itself be part of the SDK? Will JavaFX be fully licensed the same as the SDK? Or, will JavaFX be more like Google Web Toolkit, Spring, or a host of other frameworks that play nicely with standard Java but are not themselves standard Java? The updated JavaFX Roadmap does a good job of covering features in "JavaFX 2.0," but I didn't see anything covering the licensing issues.

I think the success of the various non-standard Java libraries and frameworks shows that these can succeed without being part of "standard Java" and without being JCP/JSR-based. My best current guess is that JavaFX will be delivered like these third-party products and won't be included in Java SE or Java EE. Still, I expect it to see a major upswing in adoption once developers are free to invoke JavaFX from Java, Groovy, Scala, or any other JVM-based language.

The JavaFX announcement has interesting implications related to the theme of polyglot programming mentioned earlier. On the one hand, scrapping a single non-Java language in JavaFX Script in favor of any JVM language favors polyglot programming. On the other hand, the resistance to using JavaFX Script can be interpreted as an indication that there are still many in our community not willing to learn an entirely new language to use JavaFX. One downside to the approach of requiring JavaFX Script to use JavaFX was that a developer (rightfully) could ask himself or herself, "Why learn an entirely new language just to use the newer, less mature JavaFX if I can use the more mature Flex framework by similarly learning a new language?"

Future of Java

The future of Java was also (not surprisingly) a major theme at JavaOne 2010. In general, I thought the discussions regarding the future of Java were more positive than negative. I especially liked the implication of making Java more developer-friendly while retaining its current power. I thought I heard the slightest hint from Mark Reinhold that there has even been discussion of more significant changes in later versions of Java (after Java 8), such as support for generics reification. Major changes like those could mean good things for Java. Sun never had any inclination to break backwards compatibility; perhaps Oracle is more willing to consider it.

Reinhold also stated that they expect to have major releases more frequently than the five year time span between Java SE 6 and Java 7's likely release. It was also nice to have it confirmed that the JDK 7/JDK 8 Plan B is the currently chosen plan for the next releases of Java.

One twist on the future of Java is the issue of the next big JVM language. Stephen Colebourne presented on this in his The Next Big JVM Language presentation. Although I was unable to attend the presentation, I thought his personal conclusion from preparing this presentation was interesting: "my conclusion is that the language best placed to be the Next Big JVM Language is Java itself." Cay Horstmann has addressed this observation in his similarly titled post The Next Big JVM Language. In that post, Horstmann states that he doesn't think Java itself will be the next big language. He then looks at how Scala fits or doesn't fit that potential.

San Francisco
This was my first time to spend significant time in San Francisco. I really enjoyed the city. One of the things that I liked about JavaOne's location was the proximity to the thriving Union Square area of San Francisco. I stayed at the Warwick San Francisco and truly enjoyed the experience. Just about everyone I met from the area (taxicab drivers, hotel staff, restaurant staff, etc.) was extremely friendly and appreciated the business that the conferences brought to the city. Tourism is San Francisco's #1 industry and I can appreciate why.

Just about every merchant, taxicab driver, and other vendor I talked to in San Francisco was aware of "the Oracle conference." Even when I was riding one of those double-decker tourist buses that lets you hop on and off as they stop at various locations in the city, the tour guide mentioned the Oracle conference and specifically called out the Appreciation Event (she called it, not inappropriately, "the Oracle party"). She observed, "It sure doesn't seem like there is a recession."

This was a great time of year to be in San Francisco. The mornings and evenings were cool and the afternoons were warm. The weather was so clear that I could see the San Francisco Bay, the Golden Gate Bridge, and the Bay Bridge any day that I was down in that area.


The Conference Sessions
The conference sessions were excellent. This is one of the few conferences I have attended where I did not regret attending any of the sessions that I did attend. I blogged on each one of them on this blog. I really liked the separation of marketing from technical sessions. This reminded me of my favorite aspect of conferences such as the Colorado Software Summit that always kept things technical. Peter Pilgrim blogged on some of the technical sessions and many attendees blogged on individual presentations they attended or presented. My own list of JavaOne 2010 presentation summaries/reviews is shown here.

Summaries and Highlights
There have been several excellent blogs posts and articles summarizing JavaOne 2010. I have collected links to a few here.

JavaOne 2010 Overall
JavaOne 2010 General Technical Session
JavaOne 2010 Opening Keynote
Mobile Technology
Mobile technology was in high use at this conference. I found my Droid to be very useful for many things during the conference. It allowed me to use its GPS to find my way around San Francisco. It also helped me to look up my Schedule Builder as needed. It was also useful for learning of session changes and filling out session surveys in between sessions. I used it to look up different terms via Google and Wikipedia as well. Finally, I used it to mail myself notes from some sessions when my laptop battery was nearly dead. There were mobile devices all over the place and the one drawback was the subset of individuals who chose to try to read their mobile devices while walking. The halls could get crowded at times with people heading in different directions and it didn't help to have a person wandering aimlessly and without obvious direction because he or she was too consumed with his or her mobile device.

I also was happy to have my Verizon Prepaid Mobile Broadband. It was more cost efficient to use this than to pay the hotel's daily wireless access fee (similar to how it was more cost efficient to use taxicabs than to pay to park a rental vehicle at the hotel). Although the conference's provided broadband was generally sufficient, I experienced problems with it during the "big" sessions like the JavaOne Opening Keynote and JavaOne General Technical Session. It was nice in those relatively rare events to still have access to the Internet for looking up terms and posting my blog.

Conclusion
It's my belief that JavaOne 2010 will be remembered more for being the first under Oracle stewardship, for being held outside of the Moscone Center, and for the announcements related to the future of Java and JavaFX than it will be remembered as a Google-less JavaOne. To be sure, it would have been nice to have Google employees' presentations given or, at the very least, to have had additional presentations given in those slots. But, even as it was, there were plenty of good presentations and excitement about the new directions announced for the language and platform.

Thursday, September 23, 2010

JavaOne 2010: Concurrency Grab Bag

The final session that I attended at JavaOne 2010 was the presentation "Concurrency Grab Bag: More Gotchas, Tips, and Patterns for Practical Concurrency" by Sangjin Lee and Debashis Saha (not here today) of eBay. Despite what my schedule stated on Schedule Builder, this second instance of this session was held in Parc 55's Marketstreet rather than in Cyril Magnin II.

Lee stated that many of the concurrency problems he sees involve use of Java collections. He said this presentation is on practical issues the audience would be likely to see. He referenced a session from last year's JavaOne (Robust and Scalable Concurrent Programming: Lessons from the Trenches) and said this year's presentation starts from there and delves a little deeper into the patterns.

Lee said it is better to have correctness first and then achieve performance and scalability next. Because problems usually repeat themselves, anti-patterns serve as "crutches" (red flags) for spotting "bad smell."

Lee showed an example where a concurrency issue arose because of the use of lazy initialization. He stated that we often don't need to load these lazily. He showed how to address this with the use of the volatile keyword in situations when the "data is optional and large." Lee mentioned that use of volatile is "not zero cost," but is typically not expensive enough to worry about.
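As a hedged sketch of that fix (Lee's actual code was not published with this post, and the class and field below are invented), the standard volatile-based double-checked locking idiom, which is valid in Java 5 and later, looks like this:

```java
// Hypothetical holder for "optional and large" data, lazily initialized.
public class LazyHolder {
    // Without volatile, a second thread could observe a partially
    // constructed object; the volatile write guarantees safe publication.
    private static volatile int[] data;

    public static int[] getData() {
        int[] local = data;              // one volatile read on the fast path
        if (local == null) {
            synchronized (LazyHolder.class) {
                local = data;
                if (local == null) {     // re-check under the lock
                    data = local = new int[1024]; // stand-in for the large data
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        System.out.println(getData().length);        // prints 1024
        System.out.println(getData() == getData());  // prints true: built once
    }
}
```

The local variable keeps the common case at a single volatile read, which matches Lee's point that volatile is "not zero cost" but cheap enough for this use.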

One observation Lee made was that we often have read-heavy functionality with few writers or write-heavy functionality with few readers. He outlined several implementation choices for the case of "many readers, few writers." I really liked his table summarizing "many readers, fewer writers" with type, concurrency, and staleness behavior column headers. I'd like to get a copy of the slides to use that table as a reference.

Lee also stated that the described copy-on-write approach is less useful for Maps because ConcurrentHashMap works well (albeit at the cost of large memory usage). If read performance is desired enough to justify the cost to writes, copy-on-write is great. However, Lee had some caveats for use of copy-on-write: significant write performance degradation, must avoid direct access to underlying reference, and issues of staleness. In short, what I took away from this section is that, for the case of many reads and few writes, synchronize can be used in the simplest/smallest cases, the concurrent collections will support most general cases best, and copy-on-write might be best when certain conditions exist (no applicable concurrent collection, for example).
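To make the trade-off concrete (the listener names here are invented for illustration), a minimal sketch using the JDK's stock copy-on-write collection shows both the lock-free snapshot reads and the staleness caveat:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ReadMostlyDemo {
    public static void main(String[] args) {
        // Every mutation copies the backing array -- that copy is exactly
        // the write-performance cost of the copy-on-write approach.
        List<String> listeners = new CopyOnWriteArrayList<String>();
        listeners.add("auditLogger");
        listeners.add("metricsCollector");

        // The iterator works on an immutable snapshot, so a concurrent
        // remove cannot throw ConcurrentModificationException -- but this
        // loop still observes the element it just removed (staleness).
        for (String l : listeners) {
            listeners.remove("auditLogger");
            System.out.println("observed: " + l);
        }
        System.out.println(listeners); // prints [metricsCollector]
    }
}
```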

Lee was unable to spend as much time on the case of many writers with few readers, but he did cite logging as a use case here. He pointed out that in this case, use of synchronize worsens hotly contested writes. ConcurrentHashMap is generally best again, but he also covered an asynchronous background processor.
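As a sketch of that last idea (all class and method names here are invented, not Lee's), a background processor for write-heavy workloads such as logging lets many producer threads enqueue cheaply while a single consumer thread does the slow work off the hot path:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncLogProcessor {
    private static final String POISON = "__SHUTDOWN__";
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
    private final AtomicInteger processed = new AtomicInteger();
    private final Thread worker;

    public AsyncLogProcessor() {
        worker = new Thread(new Runnable() { // Java 6 era: no lambdas yet
            public void run() {
                try {
                    String msg;
                    while (!(msg = queue.take()).equals(POISON)) {
                        processed.incrementAndGet(); // stand-in for slow log I/O
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.start();
    }

    // Cheap for writers: no long-held lock on the hot path.
    public void log(String msg) { queue.add(msg); }

    // Drains remaining messages, stops the worker, returns the count.
    public int drainAndStop() throws InterruptedException {
        queue.add(POISON);
        worker.join();
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncLogProcessor logger = new AsyncLogProcessor();
        for (int i = 0; i < 100; i++) {
            logger.log("message " + i);
        }
        System.out.println(logger.drainAndStop()); // prints 100
    }
}
```

The FIFO queue guarantees the poison pill is processed last, so shutdown sees every message that was logged before it.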

Lee reminded us of advice that is commonly given at these types of sessions: don't tune unless necessary. He also recommended the Highly Scalable Java project rather than rolling custom implementations for highly concurrent applications.

[UPDATE (24 September 2010)] As Sangjin states in the first comment on this post, he has made his slides available for reference at http://www.slideshare.net/sjlee0/concurrency-grab-bag-javaone-2010/download. He also states that he has been told that his slides and the recording of his presentation are accessible via Schedule Builder. These are well worth checking out.

This presentation was recorded and so may be available online in the near future. The audience was obviously very interested in the subject because we had a packed room of individuals who came to the final session of the conference to see it. You could also see the enthusiasm in the number of questions asked during the presentation. The downside of these questions was that it forced Lee to be rushed at the end. Lee did a nice job, however, in repeating the questions and statements so that everyone could hear what was said and this should benefit the recorded version as well.

I'd really like to get my hands on these slides. They were difficult at times to see because of the red font on blue background. Turning off the front lights helped tremendously, but it was still difficult to see the bottom of the screen from the back. This is a general criticism I have of the venue. Most of the screens in these hotel conference rooms were situated such that the bottom quarter of the screens were difficult to see from past the first several rows.

I really liked how Lee (and many other) put code samples in the slides because I have found as a presenter and as an attendee that it's easier to follow code and how it relates to discussion when it's in the slides than when it's in the IDE. Besides that, having the code in the slides keeps it packaged with the slides. I'd like to get a copy of Lee's slides because of the good reference information in them and because of the code samples I'd like to take a closer look at.

JavaOne 2010: A Brief Introduction to Scala

Steven Reynolds (a "software developer and manager" who works at INT) presented "A Brief Introduction to Scala" at JavaOne 2010. [As a side note, JavaFX Script is a casualty announced at JavaOne 2010, but JavaFX's use of SceneGraph seems stronger than ever and Reynolds has a presentation on that.] Reynolds asked who had heard of Scala and nearly everyone in the near-capacity room raised their hand.  However, when he followed up with who actually uses Scala, I could count the number of raised hands on the fingers of one of my hands.

Reynolds described Scala as "new-ish programming language for the JVM" that features a static type system and support for functional programming. In answering his own question in a slide titled, "Why Does Functional Programming Matter?" Reynolds (with tongue firmly in cheek) showed a direct correlation between Functional Programming and Google's $150 billion worth (MapReduce is the connection).

Scala's design goals include combining functional programming with object-oriented programming. Scala is also designed to be practical and interoperable with Java. The ability to call from Java to Scala and from Scala to Java allows Scala access to all SDK and other Java libraries. Scala is designed to be a powerful language that "trusts the programmer" and has a "powerful static type system that's easy to use." Reynolds is "mystified" by people referring to Scala as a scripting language because of its powerful static typing. Frankly, I too think more of a dynamic language like Groovy for scripting than I do a static language like Java or Scala.

Reynolds provided a brief overview of characteristics of functional programming. In functional programming, functions are first-class citizens. Functional programming also emphasizes extreme immutability. Scala, because it's a "blended" language, does support mutability. One advantage of functional programming is that "what was once true is always true." In addition, "reasoning and testing are simpler." Reynolds also stated that Scala is "nice for concurrency and distributed systems." Reynolds' listed disadvantages of Scala were that "modular programming is sometimes harder" and there are "sometimes performance issues." Reynolds explained that these performance issues are sometimes attributable to the need to copy objects for immutability support.

Reynolds recommends the book Structure and Interpretation of Computer Programs. He stated, however, that you need to know Scheme to read this book.

Scala "gently guides you to use immutable code," but does support mutability. Reynolds talked about the difference between the Scala keywords val and var (val is for "unchanging value" and "var" is for "varying/variable value") when designating variables.

In Scala, every statement has a value. Reynolds contrasted this to Java where, for example, "if" statements don't really have values. Scala supports type inference similar to Groovy's. I thought it was helpful to see the chart with a picture of a subset of Scala's type system. This visually made it clear that the Scala String and Double are not the same as equivalently named types in Java. It is interesting that the integer type in Scala is Int (capitalized like the Java reference type Integer, but with the same letters as the primitive 'int' type). Reynolds emphasized that the Scala types are more fully interconnected with each other in a lattice than are Java types (primitives are off on their own).

Reynolds showed an example from Scala's Predef that is available anywhere in Scala. He also talked about Scala's handy tuples. Reynolds's example created a tuple with simple parentheses-based syntax.

C++ supports multiple implementation inheritance (which is well known for the diamond problem), while Java intentionally has only single implementation inheritance. Scala goes in between: it is object-oriented with single implementation inheritance plus mixins (Traits). Scala's "with" keyword allows specification of the mixin/Trait.

Reynolds described how Scala-specific features like Traits can work when Scala is compiled to Java byte code. Scala compiles to .class files that can be placed into a JAR just as in Java. Then, Reynolds suggested, use an IDE to open that JAR in a new project. He showed this with NetBeans 6.9. This gave insight into what this looks like "in a pure Java sense." Although Reynolds called it "under the hood and low level," I do like to use tactics like this to better understand "the magic." Reynolds also used Eclipse to see the byte code of this Scala-based JAR.

The Scala compiler (scalac) compiles Scala code into Java bytecode. Another good tip Reynolds provided is to use the scalac -Xprint:typer option to see what is generated. For someone with some Java experience and thinking about using Scala, these kinds of ideas (using the IDE to see Java equivalent or using the -Xprint:typer option with scalac) can help increase the comfort level in first using Scala.

Reynolds showed Scala's highly flexible case statement and I found it interesting that underscore (_) represents the default case. It appears to me that, like Groovy's, Scala's case needs to be carefully used because multiple options could "match" the condition (order does matter!).

Reynolds introduced Scala's well-known Actors and talked about how they help avoid shared mutable state. Messages are sent asynchronously. Reynolds briefly summarized inversion of control and stated that Scala had a design goal with Actors to enable event programming without inversion of control. This led to his explanation that react does not return. Benefits of Actors include no need to worry about "safe publication" and availability of explicit concurrency. Reynolds stated that even this nice approach to concurrency is not perfect.

I enjoyed Reynolds's presentation. It was exactly what I was looking for in an initial Scala overview. My only complaint was that this packed room got pretty warm in the afternoon. I normally don't have a lot of patience for that, but Reynolds's presentation was good enough to keep me there despite the uncomfortable temperature.

During the question and answer section, an attendee asked if JUnit could be used with Scala. Reynolds confirmed that JUnit can be used with Scala and referenced the Scala-specific testing framework ScalaTest as well. Another attendee asked about tooling for Scala. Reynolds acknowledged that Scala tooling has room for improvement. He stated that Scala is expected to have built-in features added that will help tooling for Scala.

Scala seems to be all the rage at this year's JavaOne. I appreciated Reynolds acknowledging that Scala might actually have a weakness or two. In Andres Almiray's presentation yesterday, he made an interesting comment during the question and answer section in which he sort of summarized on-the-fly that Scala may not be as strong as competitors in some areas (such as Groovy in metaprogramming or Clojure in concurrency), but that Scala does many things very well. If one is looking for a "general" language to cover broad needs, that's the kind of description you'd want.

Scala seems to be one of those things which has enthusiastic evangelists running around telling everyone how great it is without admitting many or significant drawbacks. I'm always leery of such one-sided things: they rarely (read never) are as flawless as advertised. However, I think Scala could be like Ruby was for me: it cannot possibly live up to the uber hype, but it really is nice when you get past the hype and look at it realistically. I try to not let unabated enthusiasm from well-meaning supporters distract me from whether a technology is useful to learn or not. I looked past this with Ruby and liked what I found, and I could see the same happening for Scala. One of the types of sessions I like to attend at a conference is those that, within an hour or so, can help me decide if a particular subject is worth further investigation. This session did that for me: I saw enough here to believe that Scala may be worth some time investment.

JavaOne 2010: Performance Tuning from the Pros

John Duimovich ("special guest") presented Trent Gray-Donald's (IBM Java) "Performance Tuning from the Pros" presentation to a well-beyond-standing-room-only audience at JavaOne 2010. According to Duimovich, Trent keeps tweaking this presentation between times that he gives it.

Garbage collection tuning is part of performance tuning, but it's not all there is to performance tuning. The slide "Avoiding panic" had an important point: don't worry about slow-performing features that no one uses. The idea here is to "frame the problem" first. This helps to identify an appropriate approach to meet those needs and desires. As part of framing the problem, it's also important to identify what we are allowed to change. "It's just too slow" doesn't give any information as to what is needed, so there's no good way to get there.

Another good point: "don't just 'try things.'" Duimovich said that using what worked last time simply because it worked before is like taking a prescription from a previous illness when sick again.

A "key insight" cited in this presentation is one I've written about before: "Gathering performance data affects the results." Several attendees (including me) could relate when Duimovich asked how many of us have added logging to help diagnose performance problems only to find out that excessive logging is the problem.
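The logging anecdote is easy to reproduce. Below is a minimal sketch of my own (using java.util.logging; the LogGuard class name and expensiveDump method are illustrative, not from the talk) of the usual remedy: guard expensive message construction with a level check so the diagnostic itself doesn't become the bottleneck.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogGuard {
    private static final Logger LOG = Logger.getLogger(LogGuard.class.getName());

    // Simulates a costly diagnostic string -- the kind of logging
    // that can itself become the performance problem.
    static String expensiveDump() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) {
            sb.append(i).append(',');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);
        // Guarding with isLoggable avoids building the message at all
        // when FINE logging is disabled.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("state: " + expensiveDump());
        }
        System.out.println("guarded=" + !LOG.isLoggable(Level.FINE));
    }
}
```

With the logger at INFO, the expensive string is never built, so measurement overhead stays out of the measured code path.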

Duimovich stated that we need to understand the constraints that impact performance and he outlined the common ones. He then laid out how to apply the "scientific method" to tune performance. This includes changing one thing at a time, keeping a log of changes, using consistent measurements, and recording in detail the results of what was attempted. In short, you need "good discipline" when tuning performance.

Duimovich moved on to some specific approaches for performance tuning. He stated that the time for tuning with tooling is throughout the development cycle rather than waiting until the end, when panic mode will inevitably set in.

At this point, Duimovich went through some example performance tuning steps against some code obviously written to demonstrate performance problems that could be resolved, but also intended to make it easy to guess (wrongly) about what was causing the performance issue. He showed how using the tools to try changes one-at-a-time could help identify the real problems, which were not always what was first expected from looking at the code.

Duimovich used several tools including Health Center and Eclipse Memory Analyzer. Although you shouldn't try to just guess what's wrong, Duimovich did say that very often it's heap abuse, and Eclipse Memory Analyzer can point this out. Eclipse Memory Analyzer can be used with any JVM. He said he often finds collections on the heap with one or two elements.

Duimovich outlined some rules for "modern JVMs."  The first was to "write readable code." The second was to "follow JVM strengths" (use JDK libraries and don't inline) and the third was to "leverage multi-core if appropriate." Generally use class libraries, but in some cases measurement and analysis may show that a custom solution is necessary.

Things not to do in performance tuning include trying to use "hail mary" magic JVM command-line options and trying to guess at what's causing the performance problems.

This was the third session that I attended at which a substitute speaker presented the prepared presentation. As with the first two, this substitute speaker did an admirable job.

JavaOne 2010: The Garbage Collection Mythbusters

As members of the Garbage Collector Group of the HotSpot Virtual Machine Development Team, John Coomes and Tony Printezis have the credentials for presenting "The Garbage Collection Mythbusters" at JavaOne 2010. The speakers took turns talking and they started by Coomes stating they wanted to "cover the strengths and weaknesses of Java garbage collection or at least the perceived strengths and weaknesses of Java garbage collection." They then proceeded to provide a brief background ("refresher course") into the basics of garbage collection.

Tracing-based garbage collectors are considered passive and "discover the live objects" and reclaim those that aren't live.


Myth #1: Malloc/free always performs better than garbage collection

Garbage collection enables object relocation, which in turn provides many benefits (eliminates fragmentation, decreases garbage collection overhead, and supports linear allocation). Other benefits of object relocation include compaction (improves page locality) and relocation ordering (improves cache locality).

They also stated that "Generational Garbage Collection is Fast!" They cited a "recent publication" that malloc/free outperforms GC when space is tight, but GC can match or even beat malloc/free when there is "room to breathe."


Myth #2: Reference counting would solve all my Garbage Collection problems

Traditional reference counting has extra space overhead and extra time overhead. It is also non-moving and is not always incremental or prompt. Lastly, this approach cannot deal with cyclic garbage. Advanced reference counting deals with some of the limitations of traditional reference counting. Two-bit reference counts can help with the extra space overhead problem. It is also common to combine reference counting with copying GC, and a backup GC algorithm must be used to deal with cyclic garbage. There is complexity in having two garbage collectors involved, and the approach is still non-moving. A convincing argument for busting this myth is that Coomes is not aware of any modern garbage collection mechanism that uses reference counting.


Myth #3: Garbage Collection with explicit deallocation would drastically improve performance

Printezis's first argument against this myth is the philosophical issue of an increased chance of compromised safety. He also made more "practical" arguments against the ability to explicitly deallocate. One thing that made particular sense to me was the concept that garbage collectors tend to "reclaim objects in bulk," so having to deal with explicit single deallocation cases could actually impact overall performance negatively.


Myth #4: Finalizers can (and should) be called as soon as objects become unreachable

Hopefully, most Java developers today know this important fact highlighted in this presentation: "Finalizers are not like C++ destructors." They have no guarantees. If you want "prompt external resource reclamation," then dispose of your resources explicitly.
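As a sketch of the "dispose explicitly" advice (the Resource class here is hypothetical, not from the talk), try-with-resources releases the resource deterministically, with no dependence on garbage collection or finalizer timing:

```java
public class DisposeDemo {
    static class Resource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        Resource r = new Resource();
        try (Resource inScope = r) {
            // use the resource
        }
        // close() ran deterministically at the end of the try block --
        // no waiting for the garbage collector or a finalizer.
        System.out.println("closed=" + r.closed);
    }
}
```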

In conjunction with discussion on Myth #4, they referenced Reference Objects (WeakReferences, SoftReferences, etc.). They referred the audience to the "Garbage Collection-Friendly Programming" presentation they gave at 2007 JavaOne conference for more details.


Myth #5: Garbage Collection eliminates all memory leaks

Printezis stated he wished this one was true. Sadly, it's not. They provided a slide with a code sample showing "Unused Reachable Objects." Their example's simple ImageMap class had a static reference to itself. They showed that the garbage collector could never reclaim the File added to the internal map. Although the garbage collector reclaims unreachable objects, it will not reclaim unused reachable objects. These memory leaks require effort to track down and some tooling is getting better to help track them down.
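A minimal sketch in the spirit of their ImageMap example (my own code, not the slide's): a static map keeps entries strongly reachable, so the collector can never reclaim them even though the program no longer uses them.

```java
import java.util.HashMap;
import java.util.Map;

public class LeakDemo {
    // The static reference keeps everything in the map reachable forever.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    static void load(String name) {
        CACHE.put(name, new byte[1024]); // never removed: a reachable "leak"
    }

    static int cacheSize() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        load("a.png");
        load("b.png");
        // The entries are unused from here on, but still strongly reachable
        // through CACHE, so the garbage collector will not reclaim them.
        System.out.println("retained=" + cacheSize());
    }
}
```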


Myth #6: I can get a garbage collector that delivers both very high throughput and very low latency

The speakers discussed throughput versus latency. Throughput garbage collectors try to shift most of the work to GC pauses to improve throughput for the application threads. The result is the least overall garbage collection overhead at the cost of garbage collection pauses. Latency garbage collectors move work out of the garbage collector pauses, putting more work on the application threads. The result is greater garbage collector overhead for the benefit of smaller pauses. As the above makes clear, their goals are conflicting. The bullet said it all: "One GC does not rule them all." Instead, "must choose the best GC for the job." In the future, hints might be helpful, but it will always be up to a human to decide based on the particular need.


Myth #7: I need to disable GC in critical sections of my code

Disabling the garbage collector often means not being able to allocate because the heap is full or nearly full. This might impact other threads as well. A possible solution to that conundrum is to allocate in advance, but that requires knowing exactly what data is necessary. Many Java libraries freely allocate objects, and it seems unlikely that Java developers can ensure that these are avoided in the critical sections in which garbage collection is turned off. This approach has high potential for deadlocks, exceptions, and other unintended side effects. This approach might work in a "few, limited cases," but is "not a general-purpose solution" because it provides "too many ways to shoot yourself in the foot."


Myth #8: GC saves development time and doesn't cost anything

There is no need for "reclamation design" and there are fewer bugs, but the costs come out at deployment. Defaults typically work for "modest application requirements," but "stringent application requirements" require choices to be made. Applications get very little control over garbage collection (when, how much, how long).

The speakers agreed that part of this myth is true ("saves development time"), but they busted the myth because it does cost effort.


Myth #9: GC settings that worked for my last application will also work for my next application

The speaker began busting this myth by explaining how many different factors affect garbage collection performance. If somehow you could keep all of these factors exactly the same across environments and applications, then the same settings would likely work. There's not much chance of that happening, of course. "Transferring parameters" from an old application to a new application has "mixed results at best." If applications are very similar, one might consider using the older application's parameters as a starting point only, but plan to spend time and effort tuning those parameters.


Myth #10: Anything I can write in a system with GC, I can write with alloc/free

Technically, this is not a myth: one can write anything written with a garbage collector with alloc/free, because the garbage collector uses that approach in its implementation. However, it is much more difficult to do this directly with alloc/free. To turn it into a myth, they added the words "just as easily as": "Anything I can write in a system with garbage collection, I can write just as easily with alloc/free."


Conclusion

The speakers used a style that was a nice fit for this type of presentation. They took turns being the advocate for busting a particular myth and the other speaker would then be the judge to consistently conclude that the point was proven and the myth is debunked. This format allowed them to pretend to ask tough questions and be talked out of it. There was some acting involved that won't win any Oscars, but it did fit the format nicely and kept the presentation engaging. They also used humor at the end.  Myth #11 was "This talk is over" and that myth was Confirmed rather than Busted. Most of the over-capacity audience stayed through the entire presentation.

Wednesday, September 22, 2010

JavaOne 2010: JavaFX Graphics

In late July, I publicly wondered how much time I should spend on JavaFX at JavaOne 2010. JavaFX was the big announcement at the 2007 JavaOne, dominated the 2008 JavaOne, and had numerous sessions at the 2009 JavaOne, but it wasn't clear to me that JavaFX's future was worth missing other subjects to attend sessions on JavaFX. I knew the other subjects would be viable in the future, but wasn't so sure about JavaFX. The JavaOne 2010 Opening Keynote changed all that. Not only did they talk a big game about JavaFX's future (which has been the refrain from the past three JavaOne conferences), but as (or more) importantly, they described their vision for it in a way that, for the first time in a long time, I think gives it a reasonable opportunity for long-term success and widespread adoption. With that in mind, it is not surprising that Jim Graham's and Kevin Rushforth's presentation "JavaFX Graphics" was fairly well attended.

JavaFX uses the Scene Graph Model for its graphics support. The scene graph consists of a hierarchy of group and leaf nodes. It makes media directly accessible and easy to add to applications with minimal effort.

The speakers stated that some JavaFX capabilities that would get less attention from them in this presentation are SVG paths, Media (sound and video), UI controls, Input, 3D cameras, and charts.

Prism is a new graphics stack at the implementation level (it implements the scene graph at the rendering level) and is invisible to the developer. Its main reason for use is performance, but it also provides 3D. Prism will be the default renderer in the next release, expected sometime in 2011. Although it is expected that most machines will have graphics cards that support Prism by the time it is released, software rendering and the Swing toolkit will be retained for those machines that don't. The JavaFX developer sees the JavaFX UI Controls and JavaFX Scene Graph API, but does not need to deal with the Swing toolkit, Java 2D, or other implementation-specific APIs.

The presenters showed numerous effects examples with source code. Not surprisingly, most of their slides were written using JavaFX Script because they were created before the official announcement of JavaFX Script's deprecation. Also, the presenters stated that JavaFX Script uses fewer lines of code, so it was better suited for fitting onto slides. Finally, there are some features JavaFX Script provides that still need to be incorporated into Java as convenience methods.

Some of the main topics covered in this presentation were applying effects and animation. The speakers covered adding 3D perspective transforms. We won't see all 3D transforms in the next release of JavaFX, but the rest should come in the next version after that.

The audience seemed genuinely interested in JavaFX. Questions were repeatedly asked and answered regarding whether these examples demonstrated will be doable in standard Java. Repeatedly the answer was that JavaFX Script is deprecated and these functions will be available via standard Java. I believe this gives JavaFX the best chance for long-term success and widespread adoption, especially if other improvements to the Java language make it less verbose. If all Java developers can easily access JavaFX APIs and the books, articles, and blogs all indicate how to use these APIs without specific JavaFX Script, its widespread adoption should be much greater. I think the vision is there. The question, as in the past three JavaOne conferences, is how quickly it can be delivered.

JavaOne 2010: Polyglot Programming in the JVM

I was impressed with how Andres Almiray performed as a substitute speaker for Hamlet D'Arcy's JavaOne 2010 presentation "Code Generation on the JVM" and looked forward to his presentation of his own talk: "Polyglot Programming in the JVM: Or How I Learned to Stop Worrying and Love the JVM". This session was my third session of the day in the same hotel (Parc 55)!

Almiray began by outlining his credentials and providing "some facts about Java" (see Java History 101: Once Upon an Oak). Almiray stated that while generics can be good, he doesn't love them. Almiray explained that Java's language syntax was designed to be attractive to developers in C/C++, which were the dominant languages of the day. This led to Java having more verbosity than it needs today.

After outlining aspects of Java's history that have made it more challenging to use, Almiray transitioned into talking about using JVM languages other than Java to enjoy the benefits of the JVM "without the pain of Java."

The non-Java JVM languages tend to be more concise (less verbose) than Java. Almiray showed the common JavaBean example written in Java and said we have come to rely on IDEs because of the pain involved with writing these in Java. Almiray showed code for Groovy, Clojure, and Scala versions of the same JavaBean that had been written in Java originally.

Almiray moved from standard JavaBeans classes to coverage of closures. He pointed out that Java does not have closures and that Java developers commonly use anonymous inner classes to do the same thing. Java is supposed to get Lambda in a future version (JDK 8). Almiray then compared the proposed Java Lambda code with Groovy closures, Clojure closures, and Scala closures. In demonstrating these examples, Almiray reminded the audience that Groovy supports operator overloading and that Scala uses the val keyword to indicate a variable's value cannot be changed.
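Almiray's anonymous-inner-class-versus-closure point can be sketched in plain Java. This is my illustration rather than Almiray's slide code (the Transform interface is hypothetical); the second form uses the lambda syntax that eventually shipped with Project Lambda in JDK 8.

```java
public class ClosureDemo {
    interface Transform { int apply(int x); }

    public static void main(String[] args) {
        // Pre-lambda Java: an anonymous inner class stands in for a closure.
        Transform anon = new Transform() {
            @Override public int apply(int x) { return x * 2; }
        };

        // The Project Lambda syntax that shipped in JDK 8: same behavior,
        // far less ceremony.
        Transform lambda = x -> x * 2;

        System.out.println(anon.apply(21) + "," + lambda.apply(21));
    }
}
```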

Almiray's next topic was "enhanced switch," which we might get in Java 7. He showed Groovy code switching on a range. During these examples, Almiray asked who likes to use regular expressions in Java; there were no raised hands. One of Almiray's examples showed use of Groovy's ~ operator for regular expressions in a case to evaluate. Almiray went on to demonstrate the code for doing this with Scala and Clojure.

Almiray talked about one of my favorite features of Groovy: integration with Java without JSR 223. He stated that the same seamless two-way integration with Java is true of Clojure and Scala as well. Other things that these three alternative JVM languages have in common are operator overloading support, regular expressions "as first class citizens", closures, everything as an object, and native syntax for collection classes.

On operator overloading, Almiray thinks that Scala has "gone over the top" because it allows developers to write any method they want to override any operator they want (he used ==== as an example). He contrasted this with Groovy which has a much smaller set.

Almiray went on to discuss each of the three alternative JVM languages advertised in his abstract on an individual basis. He stated that Groovy's metaprogramming capability at buildtime and runtime attracted him to Groovy. He also likes the healthy Groovy ecosystem: Grails, Griffon, Gant (Groovy version of Ant), Gradle, Spock, Gaelyk (Google App Engine), Gpars (Groovy Parallel Systems), CodeNarc (code inspection), etc. Groovy was "born in 2003" and is about to celebrate its seventh birthday next month.

Scala's syntax is different from Java's/Groovy's, but Scala has a richer type system. Scala will not allow you to "put a square peg in a round hole." Scala uses type inference and can use implicits to provide hints as to the type. Scala's actor model deals with concurrency and threads. Almiray said that the Actor Model is not a silver bullet for concurrency, but is much better. Almiray also talked about Scala's Traits, which support mixing in different behaviors.

Clojure, like all LISP variants, treats "data as code and vice versa." Clojure provides immutable structures and Software Transactional Memory (STM). STM is also making its way into Scala. If STM is added to Java, it would be readily available in Groovy.

Almiray demonstrated a small example of all three of these alternative JVM languages working together. The demonstrated software was appropriately named Babel. Almiray mixed this demonstration with views of the source code in the IDE. One thing I would not have realized is that the Scala Double is NOT the Java Double.

Almiray identified "other places you might find polyglot development."  Web development features many languages/formats: XML, SQL, JavaScript, JSON, CSS, Flash/Flex/ActionScript. We're starting to see "Next-Gen Datastores (NoSQL)" (polyglot persistence): FleetDB in Clojure, FlockDB in Scala, CouchDB and Riak in Erlang. Almiray stated that some people are starting to say that NoSQL is no longer "No SQL," but is instead becoming "Not Just SQL." Polyglot build systems include Gradle/Gant for Groovy, Rake for Ruby/JRuby, and Maven 3 for XML/Groovy/Ruby.

Almiray believes that Java may have reached its maturity in terms of features. Clojure is "making a lot of noise" after being only three years old and Scala and Groovy were started in the 2000s. Although polyglot programming has received more (and more favorable) press recently, it's been around for quite some time. Almiray also referenced Stuart Halloway's Java.next blog.

JavaOne 2010: JUnit Kung Fu: Getting More Out of Your Unit Tests

My Wednesday at JavaOne 2010 began with John Ferguson Smart's standing-room-only presentation "JUnit Kung Fu: Getting More Out of Your Unit Tests." Most of the overcapacity audience responded in the affirmative when Smart asked who uses JUnit. While the majority of the audience uses JUnit 4, some do use JUnit 3. Only a small number of individuals raised their hands when asked who uses Test-Driven Development (TDD).

Smart stated that appropriate naming of tests is a significant tool in getting the most out of JUnit-based unit tests. He mentioned that JUnit 4 enables this by allowing annotations to specify the types of tests rather than having to use the method-name conventions that earlier versions of JUnit required. In the slide "What's in a name," Smart pointed out that naming tests appropriately helps express the behavior of the application being tested. Smart likes to say he doesn't write tests for his classes. Instead, classes get tested "as a side effect" of his testing of desired behaviors. Smart recommended that you don't test "how it happens," but test "what it does." If your implementation changes, your test doesn't necessarily need to change because you're only worried about outcomes and not how the outcomes are achieved. Smart talked about how appropriately named tests are more readable for people new to the tests and also provide the benefit of helping test the appropriate things (behaviors).

Smart outlined many naming tips in his slide "What's in a name" (only a subset is listed here):
  1. Don't use the word "test" in your tests (use "should" instead)
  2. Write your tests consistently
  3. Consider tests as production code
For Unit Test Naming Tip #1, Smart stated that "should" is very common in Behavior-Driven Development (BDD) circles. Test methods should be named to provide a context. They should provide the behavior being tested and the expected outcome. I liked this tip because I find myself naming my test methods similarly, but I have always started with the word "test," followed by the method name being tested, followed by the expected behavior and outcome. Smart's recommendations reaffirm some of the things I have found through experience, but I think provide a better articulation of how to do this in a more efficient way than I've been doing.

Smart stated that tests should be written consistently. He showed two choices: "Given-When-Then" or "Arrange-Act-Assert." Smart said that he uses the classic TDD approach of writing to the inputs and outputs first and then writing the implementation.
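These two tips can be sketched together in a JUnit 4 test (my example, not Smart's; the Account class is hypothetical and JUnit 4 is assumed on the classpath): the class and method names read as a sentence describing behavior and expected outcome, and the body follows the Arrange-Act-Assert structure.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// "When withdrawing cash, it should deduct the amount from the balance" --
// the name describes behavior, not an implementation method.
public class WhenWithdrawingCash {

    @Test
    public void shouldDeductAmountFromBalance() {
        // Arrange
        Account account = new Account(100);
        // Act
        account.withdraw(40);
        // Assert
        assertEquals(60, account.balance());
    }

    // Hypothetical class under test, inlined to keep the sketch self-contained.
    static class Account {
        private int balance;
        Account(int opening) { this.balance = opening; }
        void withdraw(int amount) { this.balance -= amount; }
        int balance() { return balance; }
    }
}
```

If withdraw() were reimplemented internally, this test would still be valid because it asserts only the outcome.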

Smart's bullet "Tests are deliverables too - respect them as such" summarized his discussion of the importance of refactoring tests just as production code is refactored. Similarly, he stated that they should be as clean and readable as the production code. One of the common difficulties associated with unit tests is keeping them maintained and consistent with production code. Smart pointed out that if we treat the unit tests like production code, this won't be seen as a negative. Further, if tests are maintained as part of production maintenance, they don't get to a sad state of disrepair. In the questions and answers portion, Smart further recommended that unit tests be reviewed in code reviews alongside the code being reviewed.

Smart spent over 20 minutes of the 60-minute presentation on test naming conventions. He pointed out at the end of that section that if there was only one thing he wanted us to get out of this presentation, it was the importance of unit test naming conventions. I appreciated the fact that his actions (devoting 1/3 of the presentation to naming conventions for unit tests) reaffirmed his words (the one thing that we should take away).

Smart transitioned from unit test naming conventions to covering the expressiveness and readability that Hamcrest brings to JUnit-based unit testing. Smart pointed out a common weakness of JUnit related to exceptions and understanding what went wrong. Hamcrest expresses why it broke much more clearly. Smart covered "home-made Hamcrest matchers" (custom Hamcrest matchers) and described creating these in "three easy steps." Neal Ford also mentioned Hamcrest in his JavaOne 2010 presentation Unit Testing That's Not So Bad: Small Things that Make a Big Difference.
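A small sketch of the readability Hamcrest adds (my own example; assumes the Hamcrest library on the classpath): the assertion reads like a sentence, and a failure report explains what was expected versus what was found.

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.hasItem;
import static org.hamcrest.Matchers.hasSize;

import java.util.Arrays;
import java.util.List;

public class HamcrestSketch {
    public static void main(String[] args) {
        List<String> colours = Arrays.asList("red", "green", "blue");

        // Reads like prose: "assert that colours has size 3".
        assertThat(colours, hasSize(3));
        assertThat(colours, hasItem("green"));
        // A failing matcher reports both the expectation and the actual
        // value, which is exactly the clarity Smart contrasted with a
        // bare JUnit assertTrue failure.
        System.out.println("all matchers passed");
    }
}
```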

Only a few people in the audience indicated that they use parameterized tests. Smart talked about how parameterized tests are useful for data-driven tests. JUnit 4.8.1 support for parameterized tests was demonstrated. JUnit creates as many instances of the test class as there are rows of test data. A set of results is generated that can be analyzed. Smart also talked about using Apache POI to read in data from an Excel spreadsheet to use with parameterized testing. Smart referred the audience members to his blog post Data-driven Tests with JUnit and Excel (JavaLobby version) for further details.
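The JUnit 4 parameterized-runner pattern looks roughly like this (my sketch, not Smart's demo; assumes JUnit 4 on the classpath). The runner instantiates the class once per data row, so the single test method runs against every row.

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SquareTest {

    // Each Object[] row becomes one constructor invocation.
    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { 1, 1 }, { 2, 4 }, { 3, 9 }, { 4, 16 }
        });
    }

    private final int input;
    private final int expected;

    public SquareTest(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void shouldSquareInput() {
        assertEquals(expected, input * input);
    }
}
```

Swapping the hard-coded data() rows for values read via Apache POI is what turns this into the Excel-driven variant Smart described.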

Smart demonstrated using parameterized tests in web application testing using Selenium 2. The purpose of this demonstration was to show that parameterized tests are not limited solely to numeric calculations.

Smart next covered JUnit Rules. He specifically discussed TemporaryFolder Rule, ErrorCollector Rule, Timeout Rule, Verifier Rule, and Watchman Rule. The post JUnit 4.7 Per-Test Rules also provides useful coverage of these rules.
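Two of those rules can be sketched as follows (my example; assumes JUnit 4.7+ on the classpath): TemporaryFolder gives each test a scratch directory that JUnit deletes afterwards, and Timeout fails any test in the class that exceeds the limit.

```java
import static org.junit.Assert.assertTrue;

import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import org.junit.rules.Timeout;

public class RulesSketch {

    // A fresh folder per test, cleaned up automatically afterwards.
    @Rule public TemporaryFolder folder = new TemporaryFolder();

    // Applies to every test in the class (milliseconds).
    @Rule public Timeout timeout = new Timeout(1000);

    @Test
    public void shouldCreateFileInTemporaryFolder() throws Exception {
        File f = folder.newFile("scratch.txt");
        assertTrue(f.exists()); // no manual cleanup code needed
    }
}
```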

Smart believes that recently added JUnit Categories will be production-ready once adequate tooling is available. You currently have to run JUnit Categories using JUnit test suites (the other work-around involves "mucking around with the classpath"). Smart's Grouping Tests Using JUnit Categories talks about JUnit Categories in significantly more detail.

Parallel tests can lead to faster running of tests, especially when multiple CPUs are available (common today). Smart showed a slide that indicated how to set up parallel tests in JUnit with Maven. This requires JUnit 4.8.1 and Surefire 2.5 (Maven).

Smart recommended that those not using a mocking framework should start using a mocking framework to make unit testing easier. He suggested that those using a mocking framework other than Mockito might look at Mockito for making their testing even easier. He stated that Mockito's mocking functionality is achieved with very little code or formality. The JUnit page on Mockito has this to say about Mockito:
Java mocking is dominated by expect-run-verify libraries like EasyMock or jMock. Mockito offers simpler and more intuitive approach: you ask questions about interactions after execution. Using mockito, you can verify what you want. Using expect-run-verify libraries you often look after irrelevant interactions.
Mockito has similar syntax to EasyMock, therefore you can refactor safely. Mockito doesn't understand the notion of 'expectation'. There is only stubbing or verifications.
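The stub-then-verify style that quote describes looks roughly like this in practice (a minimal sketch, not an example from the talk): no expectations are recorded up front; you stub what you need and ask about the interactions you care about afterwards.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;

public class MockitoSketch {
    public static int demo() {
        @SuppressWarnings("unchecked")
        List<String> mockedList = mock(List.class);

        when(mockedList.size()).thenReturn(3);   // stubbing, not an expectation

        mockedList.add("one");                   // exercise the mock
        int size = mockedList.size();

        verify(mockedList).add("one");           // verify only what matters
        return size;
    }
}
```

Irrelevant interactions (here, any calls other than `add("one")` that we chose not to verify) simply go unchecked rather than failing the test — the contrast with expect-run-verify libraries the quote draws.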
Like Neal Ford, Smart mentioned Infinitest. He said it used to be open source, but is now a combination of commercial/open source. The beauty of this product is that "whenever you save your file changes [your production source code], [the applicable] unit tests will be rerun."

Smart stated something I've often believed is a common weakness in unit testing. He referred to this as a "unit test trap": our tests often test and pass what we coded, but not necessarily the behavior we wanted. Because coders know what they wrote, it is not surprising that their tests validate that the code does what they intended it to do.

Regarding code coverage tools, Smart stated that these are useful but should not be solely relied upon. He pointed out that these tools show what is covered, but what we really care about is what is not covered. My interpretation of his position here is that code coverage tools are useful to make sure that a high level of test coverage is being achieved, but then further analysis needs to start from there. Developers cannot simply get overconfident because they have a high level of test coverage.

Smart stated in his presentation and then reaffirmed in the questions and answers portion that private methods should not be unit tested. His position is that if a developer is uncomfortable with them enough to unit test them individually, that developer should consider refactoring them into their own testable class. For me, this fits with his overall philosophy of testing behaviors rather than testing "how." See the StackOverflow thread What's the best way of unit testing private methods? for interesting discussion that basically summarizes common arguments for and against testing private methods directly.

Smart had significant substance to cover and ran out of time (Smart quipped that we had "approximately minus 20 seconds for questions"). This is my kind of presentation! In many ways, it was like trying to drink from a fire hose, but I loved it! There are numerous ideas and frameworks he mentioned that I plan to go spend quality time investigating further. I'm especially interested in the things that both he and Neal Ford talked about.

DISCLAIMER: As with all my reviews of JavaOne 2010 sessions, this post is clearly my interpretation of what I thought was said (or what I thought I heard). Any errors or misstatements are likely mine and not the speaker's and I recommend assuming that until proven otherwise. If anyone is aware of a misquote or misstatement in this or any of my JavaOne 2010 sessions reviews, please let me know so that I can fix it.

JavaOne 2010: Visualizing the Java Concurrent API

One of my goals at JavaOne 2010 was to attend some sessions on Java concurrency. I had not been aware of the Java Concurrent Animated project until reading the abstract for this presentation:
This presentation consists of a series of animations that visualize the functionality of each Java concurrent component. Each animation features buttons that correspond to the methods in that component. Each click of a button creates a thread calling a method, showing how the threads interact in real time. Each animation is controlled by the actual Java concurrent component it is illustrating, so the animation is not only a visual demonstration, it's also a code example. If you're still using constructs like Thread.start or wait/notify, you'll want to attend this meeting. The presentation is packaged as a self-executable Java Archive (JAR) file and is available for download. It is a valuable reference.
The abstract looked outstanding, but in the back of my mind I wondered if it was worth going to a presentation on this if I could simply run the advertised self-executable JAR file. It turns out that I was glad that I did attend because there was significant discussion regarding what was graphically being displayed. This was another standing-room-only presentation.

The speakers began the presentation with a brief history of concurrency support in Java. They have worked on this application to serve as a graphical illustrative reference for how the various concurrent structures work.

The specific concurrency structures first discussed were Executors (java.util.concurrent.Executor), which provide access to thread pools. The "bouncer class of Java" is the Semaphore (java.util.concurrent.Semaphore). Whereas a lock is reentrant, a Semaphore is not. It is also possible for a Semaphore to be released by a Thread other than the Thread that acquired it.
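That last point — a permit acquired in one thread may legally be released by another — can be sketched as follows (class and method names are my own, for illustration):

```java
import java.util.concurrent.Semaphore;

// Unlike a lock, a Semaphore has no notion of an owning thread:
// the permit taken by the main thread is handed back by a helper thread.
public class SemaphoreHandoff {
    public static int run() throws InterruptedException {
        Semaphore permits = new Semaphore(1);
        permits.acquire();                               // main thread takes the only permit

        Thread releaser = new Thread(permits::release);  // a different thread gives it back
        releaser.start();
        releaser.join();

        return permits.availablePermits();               // permit count restored
    }
}
```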

ReentrantLock (java.util.concurrent.locks.ReentrantLock) is a replacement for synchronized that allows the same Thread to reenter. This makes it possible to avoid potential deadlock problems associated with synchronized. The cost is the need to manually unlock it from the same Thread that locked it.
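The manual-unlock obligation leads to the standard lock()/try/finally idiom, sketched here with an invented counter class:

```java
import java.util.concurrent.locks.ReentrantLock;

// The idiomatic ReentrantLock pattern: lock() followed immediately by
// try/finally so the owning thread always releases the lock, even if
// the guarded code throws.
public class CounterWithLock {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();
        try {
            count++;          // this thread could call lock() again here (reentrancy)
        } finally {
            lock.unlock();    // must be the same thread that locked
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```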

Condition (java.util.concurrent.locks.Condition) is directly related to the Lock (same package!). The Javadoc for Condition summarizes its purpose nicely:
Condition factors out the Object monitor methods (wait, notify and notifyAll) into distinct objects to give the effect of having multiple wait-sets per object, by combining them with the use of arbitrary Lock implementations. Where a Lock replaces the use of synchronized methods and statements, a Condition replaces the use of the Object monitor methods.
Even this Condition still has the possibility of "spurious wakeups."
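Because of those possible spurious wakeups, await() must always sit inside a loop that rechecks the predicate. A minimal sketch (the mailbox class is invented here for illustration):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A one-slot hand-off using Lock + Condition. The while loop (rather
// than an if) guards against spurious wakeups: await() may return even
// though no thread signalled, so the predicate must be rechecked.
public class OneSlotMailbox {
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private String message;

    public void put(String m) {
        lock.lock();
        try {
            message = m;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    public String take() throws InterruptedException {
        lock.lock();
        try {
            while (message == null) {   // loop handles spurious wakeups
                notEmpty.await();
            }
            String m = message;
            message = null;
            return m;
        } finally {
            lock.unlock();
        }
    }
}
```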

The ReentrantReadWriteLock (java.util.concurrent.locks.ReentrantReadWriteLock) policy has changed from J2SE 5 to Java SE 6.

CyclicBarrier (java.util.concurrent.CyclicBarrier) and CountDownLatch (java.util.concurrent.CountDownLatch) are familiar concepts for those familiar with working with hardware. The Javadoc describes the CyclicBarrier: "A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point." It also describes an effect described in the presentation: "The CyclicBarrier uses an all-or-none breakage model for failed synchronization attempts: If a thread leaves a barrier point prematurely because of interruption, failure, or timeout, all other threads waiting at that barrier point will also leave abnormally via BrokenBarrierException (or InterruptedException if they too were interrupted at about the same time)."  The Javadoc also provides a nice summary of CountDownLatch:
A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon -- the count cannot be reset. If you need a version that resets the count, consider using a CyclicBarrier.
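The one-shot "starting gate" behavior the Javadoc describes can be sketched like this (class and counts invented for illustration):

```java
import java.util.concurrent.CountDownLatch;

// Two latches: one acts as a starting gate that holds all workers back,
// the other lets the main thread wait until every worker has finished.
public class LatchDemo {
    public static int run() throws InterruptedException {
        final int workers = 3;
        CountDownLatch startSignal = new CountDownLatch(1);
        CountDownLatch doneSignal = new CountDownLatch(workers);
        final int[] completed = {0};

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                try {
                    startSignal.await();        // block until the gate opens
                    synchronized (completed) { completed[0]++; }
                } catch (InterruptedException ignored) {
                } finally {
                    doneSignal.countDown();     // report completion
                }
            }).start();
        }

        startSignal.countDown();                // open the gate (one-shot)
        doneSignal.await();                     // wait for all workers
        return completed[0];
    }
}
```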
AtomicInteger (java.util.concurrent.atomic.AtomicInteger) is representative of thread-safe operations on single variables.  See the java.util.concurrent.atomic package description for further details.
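A quick illustration of that thread safety (my own example, not from the session): concurrent increments via incrementAndGet() never lose updates, with no synchronized block required.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Four threads each increment the counter 1000 times; incrementAndGet()
// is an atomic read-modify-write, so no updates are lost.
public class AtomicCounterDemo {
    public static int run() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counter.incrementAndGet();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return counter.get();   // always 4000
    }
}
```

Had a plain `int` with `count++` been used instead, the final total would usually come up short due to interleaved read-modify-write sequences.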

The BlockingQueue (java.util.concurrent.BlockingQueue) and Future (java.util.concurrent.Future) [often returned by Executors] were also discussed and demonstrated. Callable is similar to Runnable but returns a result (a generic type) instead of void.

The java.util.concurrent approach also introduced the ability to set time units explicitly rather than specifying everything in milliseconds. This is available via the TimeUnit enum.
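These last pieces fit together naturally; a minimal sketch (names invented here) submits a Callable to an Executor, gets a Future back, and uses a timed get() with an explicit TimeUnit:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// A Callable (unlike Runnable) returns a value; submitting it yields a
// Future, and the timed get() takes an explicit TimeUnit rather than a
// raw millisecond count.
public class FutureDemo {
    public static int run() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Callable<Integer> task = () -> 6 * 7;     // returns a value
            Future<Integer> future = pool.submit(task);
            return future.get(5, TimeUnit.SECONDS);   // explicit unit, not ms
        } finally {
            pool.shutdown();
        }
    }
}
```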

Download Java Concurrent Animated at http://sourceforge.net/projects/javaconcurrenta/. Because this executable uses the very concurrent structures it is illustrating, there is no "artificial" rendering of what's happening. This means that even strange or unexpected behaviors can be seen.

On a pretty much unimportant and unrelated note, I was in the same room for two consecutive sessions for the first (and likely only) time during this conference. Both this presentation and the presentation I attended before it (JUnit Kung Fu: Getting More Out of Your Unit Tests) were held in Parc 55's Embarcadero.