28 Jun 2008
I must say the Eclipse Memory Analyzer looks pretty slick. There is some pretty good material over on the developers’ blog. Lastly, there was a talk on it at JavaOne 2008 titled ‘Automated Heap Dump Analysis for Developers, Testers, and Support Employees’ (multimedia recording).
The Eclipse Memory Analyzer is a fast and feature-rich Java heap analyzer that helps you find memory leaks and reduce memory consumption.
The Memory Analyzer was developed to analyze production heap dumps with hundreds of millions of objects. Once the heap dump is parsed, you can re-open it instantly, immediately get the retained size of single objects, and quickly approximate the retained size of a set of objects. The reference chain to the Garbage Collection Roots then details why the object is not garbage collected.
Using these features, a report automatically extracts leak suspects. It includes details about the objects accumulated, the path to the GC Roots, plus general information like system properties.
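As an illustration of the kind of leak such a report surfaces, here’s a minimal, self-contained sketch (the class and field names are my own invention, not anything from the tool): a static collection that is only ever added to pins every element it holds, so the collection shows up as the accumulation point and the static field as the path to the GC Roots.

```java
import java.util.ArrayList;
import java.util.List;

// Classic leak pattern: a cache reachable from a GC root (the static
// field) that is never evicted. Every byte[] added below stays
// reachable forever, so a heap analyzer would report the List as the
// leak suspect and the static field as its path to the GC Roots.
public class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void remember(byte[] data) {
        CACHE.add(data); // never removed -> retained size grows without bound
    }

    public static int size() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            remember(new byte[1024]); // roughly 1 MB still pinned after the loop
        }
        System.out.println("entries still reachable: " + size());
    }
}
```

Feed a heap dump of a process like this into the analyzer and the dominator tree makes the accumulation point obvious in a way that staring at `OutOfMemoryError` stack traces never does.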
The tool was actually contributed by SAP and is currently an incubation project over at Eclipse. It’s available as both an Eclipse feature and a standalone Eclipse RCP application.
Kudos to the team for taking the time to provide a standalone package!
24 Jun 2008
For the past couple of days I’ve been attending the caBIG Annual Meeting (it’s the fifth such meeting and, by all accounts, the best attended).
About caBIG
caBIG™ stands for the cancer Biomedical Informatics Grid™. caBIG™ is an information network enabling all constituencies in the cancer community – researchers, physicians, and patients – to share data and knowledge. The components of caBIG™ are widely applicable beyond cancer as well.
The mission of caBIG™ is to develop a truly collaborative information network that accelerates the discovery of new approaches for the detection, diagnosis, treatment, and prevention of cancer, ultimately improving patient outcomes.
In a nutshell, caBIG is an initiative of the National Cancer Institute built heavily upon open-source components that aims to motivate and facilitate the sharing of data. It strongly suggests a front-loaded UML workflow (using MDA) and incorporates certain aspects of ontologies and common data definitions to help guarantee consistent semantics (and syntax).
From a purely technical perspective, I’ve never been sold on the idea of MDA. I’ve had experience with both open-source and commercial modeling tools that have never delivered on the promise of true round-tripping (and if you don’t have round-tripping… well, you’re in for a world of hurt). Now in the caBIG case, there is a pipeline of transformations that you’re more or less required to run through…
1. Create a UML representation of your object and domain models
2. Annotate the model with caBIG-specific annotations and stereotypes
3. Run the annotated model through the Semantic Integration Workbench (another caBIG tool)
4. Submit the final model (XMI) to caBIG for approval and insertion into the caDSR (cancer data standards repository)
Make a change in the future and you’re more or less required to run through steps #1-4 again.
Once you have a validated UML model, you can then run through the caCORE SDK and generate skeleton code for a 3-tiered application consisting of (at a high level) a Hibernate data model, an external API and some middleware code to glue the API and data model together. Round-tripping is essentially non-existent from what I’ve heard and seen.
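To make the 3-tier shape concrete, here is a hand-written sketch of that kind of skeleton (all names are hypothetical; this is not actual caCORE SDK output): a domain POJO that a Hibernate mapping would persist, a thin service interface as the external API, and a delegate gluing the two together (an in-memory map stands in for the data tier).

```java
// Hypothetical sketch of the 3-tier skeleton shape described above;
// none of these names come from the actual caCORE SDK.

// 1. Domain object a Hibernate mapping would persist.
class Gene {
    private Long id;
    private String symbol;

    Gene(Long id, String symbol) {
        this.id = id;
        this.symbol = symbol;
    }

    public Long getId() { return id; }
    public String getSymbol() { return symbol; }
}

// 2. External API exposed to remote clients.
interface GeneService {
    Gene findBySymbol(String symbol);
}

// 3. Middleware glue: delegates API calls to the data tier
//    (a HashMap stands in for the Hibernate session here).
class GeneServiceImpl implements GeneService {
    private final java.util.Map<String, Gene> store =
            new java.util.HashMap<String, Gene>();

    public void save(Gene gene) {
        store.put(gene.getSymbol(), gene);
    }

    public Gene findBySymbol(String symbol) {
        return store.get(symbol);
    }
}
```

The generated code is a starting point, not a round-trippable artifact: change the model and you regenerate, hand edits and all.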
You’re done! Congratulations on achieving Silver-level compliance.
…
wait a second.
It’s a little bit too process heavy for my liking. I would have liked to see NIH/caBIG focus first and foremost on data interoperability and less on tools, particularly tools that dictate particular workflows (like UML -> annotation -> MDA -> code generation).
I’d much rather see an extensible API with pluggable end-points, a metadata registration service, and a suite of validation test cases. Define suitable goals and keep it simple. Provide suitable incentives and vendors *will* support it.
There’s more than one way to provide interoperability.
As a developer working on existing products that are considering support for caBIG, the requirement to fundamentally change my development process is a bit unnerving. Speaking generally, there’s no guarantee that everyone has UML models for their systems, and even if they did, attempting full MDA transformations on them would be fairly ambitious.
That’s it for now. It’s been an interesting conference and I’ve learned a lot about the various initiatives and their progress. Off to visit customers in Cincinnati tomorrow!
18 Jun 2008
We recently finished migrating our product from Java5 to Java6. The software migration itself went quite smoothly, with only a couple of unanticipated problems.
However, we do have a number of developers on MacBook Pros (myself included) who began having problems with other Java-based applications after making Java6 their default JVM.
One such problem was with the popular Spark IM client. After upgrading to Java6 we started getting the following exception:
macos:/Applications/Spark.app/Contents/MacOS ajordens$ ./JavaApplicationStub
NSRuntime.loadLibrary(/usr/lib/java/libObjCJava.dylib) error.
java.lang.UnsatisfiedLinkError: /usr/lib/java/libObjCJava.A.dylib:
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1822)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1702)
at java.lang.Runtime.load0(Runtime.java:770)
at java.lang.System.load(System.java:1005)
at com.apple.cocoa.foundation.NSRuntime.loadLibrary(NSRuntime.java:127)
Revert back to Java5 and the problems disappeared.
Solution:
Reverting back to Java5 for a particular application is about the only suggestion I’ve seen that has actually worked.
Fortunately, you should be able to apply the change directly to the problem application’s Info.plist and not system-wide. Best of both worlds in a way.
Using the Spark example:
Edit /Applications/Spark.app/Contents/Info.plist and change the value associated with the JVMVersion key to be 1.5 instead of 1.5+.
Similar to a JNLP file, this will result in the runtime falling back to 1.5.0.
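For reference, the relevant portion of the Info.plist ends up looking something like this (only the JVMVersion key and its value come from the fix above; the surrounding dictionary is the standard layout for Apple’s Java application bundles of that era):

```xml
<key>Java</key>
<dict>
    <!-- "1.5" pins the app to Java 5; the original "1.5+" let the
         runtime resolve to Java 6 once it became the default. -->
    <key>JVMVersion</key>
    <string>1.5</string>
</dict>
```

Because the change lives inside Spark’s own bundle, every other application on the system keeps using whichever JVM it prefers.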