News

Posted about 18 years ago by Ola Bini
For the last five weeks, a team consisting of me, Alexey Verkhovsky, Matt Wastrodowski and Toby Tripp from ThoughtWorks, and Rich Manalang from Oracle has created a new application based on an internal Oracle application. This site is called Oracle Mix, and is aimed to be the way Oracle's customers communicate with Oracle and each other: suggesting ideas, answering each other's questions, and generally networking.

Why is this a huge deal? Well, for me personally it's really kinda cool... It's the first public JRuby on Rails site in existence. It's deployed on the "red stack": Oracle Enterprise Linux, Oracle Application Server, Oracle Database, Oracle SSO, Oracle Internet Directory. And JRuby on Rails.

It's cool. Go check it out: http://mix.oracle.com.
Posted about 18 years ago by Ola Bini
Last week I attended QCon San Francisco, a conference organized by InfoQ and Trifork (the company behind JAOO). I must admit that I was very positively surprised. I had expected it to be good, but I was blown away by the quality of most presentations. The conference had a system where you rated sessions by handing in a green, yellow or red card; I think I handed in two yellow cards, and the rest were green.

Everything started out with tutorials. I didn't go to the first tutorial day, but the second day's tutorial was my colleagues Martin Fowler and Neal Ford talking about Domain Specific Languages, so I decided to attend that. All in all it was lots of very interesting material. Sadly, I managed to get slightly food poisoned from the lunch, so I didn't stay the whole day out.

On Wednesday, Kent Beck started the conference proper with a really good keynote on why Agile development really isn't anything other than the way the world expects software development to happen nowadays. It's clear to see that the Agile way provides many of the -ilities that we have a responsibility to deliver. A very good talk.

After that, Richard Gabriel delivered an extremely interesting presentation on how to think about ultralarge, self-sustaining systems, and how we must shift the way we think about software to be able to handle large challenges like this.

The afternoon's sessions were dominated by Brian Goetz's extremely accomplished presentation on concurrency. I really liked seeing most of the knowledge available right now condensed into a 45-minute presentation, discussing most of the things we as programmers need to think about regarding concurrency. I am so glad other people are concentrating on these hard problems, though; concurrency scares me.

The panel on the future of Java was interesting, albeit I didn't really agree with some of the conclusions Rod Johnson and Josh Bloch arrived at.

The day was capped by Richard Gabriel doing a keynote called 50 in 50. I'm not sure keynote is the right word. A poem, maybe? Or just a performance. It was very memorable, though. And beautiful. It's interesting that you can apply that word to something that discusses different programming languages, but there you have it.

During Thursday I was lazy and didn't attend as many sessions as I did on Wednesday. I saw Charles doing the JRuby presentation, Neal Ford discussing DSLs again, and my coworker Jim Webber ranting about REST, SOA and WSDL. (Highly amusing, but beneath the hilarious surface Jim definitely had something very important to say about how we build Internet applications. I totally agree. Read his blog for more info.)

Friday was also very good, but I missed the session about the Second Life architecture, which seemed very interesting. Justin Gehtland talked about CAS and OpenID in Rails, both solutions that I think are really important and have their place in basically any organization. Something he said that rang especially true with me is that a Single Sign-On architecture isn't just about security; it's a way to make it easier to refactor your applications, giving you the possibility to combine or separate applications at will. Very good. Although it was scary to see the code the Ruby CAS server uses to generate token IDs. (Hint: it's very easy to attack that part of the server.)

Just to strike a balance, I had to satisfy my language geekery by attending Erik Meijer's presentation on C#. It was real good fun, and Erik didn't get annoyed at the fact that Josh Graham and I interrupted him after more or less every sentence with new questions.

Finally, I saw half of Obie's talk about the new REST support in Rails 2.0 (and he gave me a preview copy of his book; review forthcoming). There is lots of stuff there that can really make your application so much easier to code. Nice.

The day ended with two panels: first me, Charles, Josh Susser, Obie and James Cox talking about Rails, the future of the framework, and some about the FUD that inevitably happens. The final panel was Martin Fowler moderating me, Erik Meijer, Aino Vonge Corry and Dan Pritchett, talking about the things we had seen at the conference. The discussion ranged from large-scale architecture down to concurrency implementations. Hopefully the audience was satisfied.

All in all, an incredibly good time.
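On the CAS token-ID remark: the usual fix for predictable token generation is to draw from the operating system's entropy pool rather than a seedable PRNG. Here is a minimal sketch of the contrast (my own illustration; weak_ticket and strong_ticket are hypothetical names, not code from the RubyCAS server):

```ruby
require 'securerandom'

# Weak: a seeded PRNG is fully reproducible, so an attacker who can guess
# the seed (say, a timestamp) can regenerate every "random" ticket.
def weak_ticket(seed)
  srand(seed)
  Array.new(20) { rand(36).to_s(36) }.join
end

# Stronger: SecureRandom reads from the OS entropy source; no seed to guess.
def strong_ticket
  "ST-" + SecureRandom.hex(16)
end
```

Calling weak_ticket twice with the same seed yields the identical "random" ticket, which is exactly the property an attacker needs.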
Posted about 18 years ago
I read the blog entry Exploring JRuby Syntax Trees With Scala, Part 1 and it tickled me enough to port it to JRuby. So here it is... A little commentary on it first:

I used JRuby's builtin 'jruby' module. This gives me access to JRuby::parse(content), which returns a single tree structure of the AST. This is nice since it does not really depend on any unpublished internal APIs.

I did not spend any time making this pretty, but the AST the parse method gives back actually includes extra information which could be used to great effect. For example, it includes the line number, start offset, and end offset where the AST node takes place. This information could be used to make a pure-JRuby editor, for example (I am not daring anyone to try it either :) )

Raw Swing offends me... Using Cheri, Profligacy, or MonkeyBars probably would have been a better choice, but it was a quick port.

The code:

    require 'jruby'
    import javax.swing.JFrame
    import java.awt.BorderLayout

    class JRubyTreeModel
      include javax.swing.tree.TreeModel

      def initialize(script_content)
        super()
        @root = JRuby::parse(script_content, "", true)
      end

      def getRoot; @root; end
      def isLeaf(node); node.childNodes.empty?; end
      def getChildCount(parent); parent.childNodes.size; end
      def getChild(parent, index); parent.childNodes[index]; end
      def getIndexOfChild(parent, child); parent.childNodes.index child; end
      def valueForPathChanged; end
      def addTreeModelListener(listener); end
      def removeTreeModelListener(listener); end
    end

    tree_visualizer = javax.swing.JTree.new
    input = javax.swing.JTextArea.new

    frame = JFrame.new "JRuby AST test"
    frame.setDefaultCloseOperation(JFrame::EXIT_ON_CLOSE)
    frame.set_size 640, 450

    content = frame.content_pane
    content.setLayout(BorderLayout.new)
    content.add(tree_visualizer, java.awt.BorderLayout::CENTER)

    bottom_panel = javax.swing.JPanel.new
    bottom_panel.setLayout BorderLayout.new
    content.add(bottom_panel, BorderLayout::SOUTH)
    bottom_panel.add(javax.swing.JScrollPane.new(input), BorderLayout::CENTER)

    input.setText("x=1\ny = 2\n")
    tree_visualizer.model = JRubyTreeModel.new input.getText

    button = javax.swing.JButton.new "Parse!"
    button.addActionListener { tree_visualizer.model = JRubyTreeModel.new input.getText }
    bottom_panel.add(button, BorderLayout::SOUTH)

    frame.visible = true
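As a rough CRuby analogue of the JRuby::parse tree used here (an illustration, not a port): the stdlib Ripper parser also returns a nested tree with position information, and the same kind of recursion the TreeModel performs over childNodes works over its arrays:

```ruby
require 'ripper'

# Count nodes by recursing over the nested-array AST, the same shape of
# walk the Swing TreeModel above performs over childNodes.
def count_nodes(sexp)
  return 0 unless sexp.is_a?(Array)
  1 + sexp.sum { |child| count_nodes(child) }
end

ast = Ripper.sexp("x=1\ny = 2\n")   # same toy input as the demo
# Leaf tokens carry [line, column] positions, e.g. [:@ident, "x", [1, 0]]
```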
Posted over 18 years ago by Charles Oliver Nutter
As many of you know, Ruby was created in Japan by Yukihiro Matsumoto, and most of the core development team is still Japanese to this day. This has posed a serious problem for the Ruby community, since the language barrier between the Japanese core team and community and the English-speaking community is extremely high. Only a few members of the core team can speak English comfortably, so discussions about the future of Ruby, bug fixes, and new features happen almost entirely on the Japanese ruby-dev mailing list. That leaves those of us English speakers on the ruby-core mailing list out in the cold.

We need a two-way autotranslator.

Yes, we all know that automated translation technology is not perfect, and that for East Asian languages it's often barely readable. But even having partial, confusing translations of the Japanese emails would be better than having nothing at all, since we'd know that certain topics are being discussed. And English-to-Japanese translators do a bit better than the reverse direction, so core team members interested in ruby-core emails would get the same benefit.

I imagine this is also part of the reason Rails has not taken off as quickly in Japan as it has in the English-speaking world: the Rails core team is peopled primarily by English speakers, and the main Rails lists are all in English. Presumably, an autotranslating gateway would be useful for many such communities.

But here's the problem: I know of no such service.

There are multiple translation services, free and for pay, that can handle Japanese to some level. Google Translate and Babelfish are the two I use regularly. But these only support translating a block of text or a URL entered into a web form. There also does not appear to be a Google API for Translate, so screen-scraping would be the only option at present.

The odd thing about this is that autotranslators are good enough now that there could easily be a generic translation service for dozens of languages. Enter the source and target languages and the source and target mailing lists, and it would busily chew through mail. For closely-related European languages, autotranslators do an extremely good job. And just last night I translated a Chinese blog post using Google Translate that ended up reading as almost perfect English. The time is ripe for such a service, and making it freely available could knock down some huge barriers between international communities.

So, who's going to set it up first and grab the brass ring (or is there a service I've overlooked)?
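The gateway described here is easy to sketch. Everything below is hypothetical: the class and method names are mine, and the translator is a stub standing in for a real (or scraped) translation service:

```ruby
# Hypothetical mailing-list translation gateway: take each message from a
# source list, machine-translate the body, and build the message to repost
# to the target list, tagged so readers know it is an autotranslation.
class TranslationGateway
  def initialize(target_list, translator)
    @target_list = target_list  # e.g. "ruby-core"
    @translator = translator    # any object responding to #translate
  end

  # Build the outgoing message for one incoming post.
  def relay(subject, body, from:, to:)
    {
      list: @target_list,
      subject: "[autotranslated #{from}->#{to}] #{subject}",
      body: @translator.translate(body, from: from, to: to)
    }
  end
end

# Stub translator; a real one would call (or screen-scrape) a service.
class UpcaseTranslator
  def translate(text, from:, to:)
    "(#{from}->#{to}) #{text.upcase}"
  end
end

gateway = TranslationGateway.new("ruby-core", UpcaseTranslator.new)
message = gateway.relay("patch review", "kore wa tesuto desu", from: "ja", to: "en")
```

The translator is deliberately pluggable: swapping the stub for a scraping backend changes nothing else in the gateway.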
Posted over 18 years ago by Charles Oliver Nutter
Oh yes, you all know you love the Alioth Shootout.

Isaac Gouy has updated the JRuby numbers, and modified the default comparison to be with Ruby 1.8.6 rather than with Groovy as it was before. And true to form, JRuby is faster than Ruby on 14 out of 18 benchmarks.

There are reasons for each of the four benchmarks that are slower:

pidigits is simply too short for JRuby to hit its full stride. Alioth runs it with n = 2500, which on my system doing a simple "time" results in JRuby taking 11 seconds and Ruby taking 5. If I bump that up to 5000, JRuby takes 27 seconds to Ruby's 31.

regex-dna and reverse-complement are both hitting the Regexp performance problem we have in JRuby 1.0.x and in the 1.1 beta. We expect to have that resolved for 1.1 final, and Ola Bini and Marcin Mielczynski are each developing separate Regexp engines to that end.

startup, beyond being a touch unfair for a JVM-based language right now, is actually about half our fault and half the JVM's. The JVM half is the unpleasantly high cost of classloading, and specifically the cost of generating many small temporary classes into their own classloaders, as we have to do in JRuby to avoid leaking classes as we JIT and bind methods at runtime. The JRuby half is the fact that we're loading and generating so many classes, most of them too far ahead of time or that will never be used. So there's blame to go around, but we'll never match Ruby's time for this.

The standard disclaimer applies about reading too much into these kinds of benchmarks, but it's certainly a nice set of numbers to wake up to.
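The pidigits pattern (a fixed startup cost amortized over a longer run) is simple to model. The numbers below are illustrative constants of my own, not measurements of JRuby:

```ruby
# Model: total time = fixed startup cost + per-unit work rate * n.
# A runtime with heavy startup but a faster steady-state rate loses short
# benchmarks and wins long ones.
def total_time(n, startup, rate)
  startup + n * rate
end

JIT_STARTUP = 6.0    # seconds: VM boot, classloading, JIT warmup
JIT_RATE    = 0.002  # seconds per unit of work once warm
INTERP_RATE = 0.004  # seconds per unit of work, no startup cost

total_time(2500, JIT_STARTUP, JIT_RATE)  # => 11.0 (loses at small n)
total_time(2500, 0, INTERP_RATE)         # => 10.0
total_time(5000, JIT_STARTUP, JIT_RATE)  # => 16.0 (wins at larger n)
total_time(5000, 0, INTERP_RATE)         # => 20.0
```

With these constants the crossover point is n = 3000: below it the no-startup runtime wins, above it the JIT-style runtime does.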
Posted over 18 years ago by Charles Oliver Nutter
A bit ago, I was catching up on my feeds and noticed that Neal Gafter had announced the first prototype of Java closures. I've been a fan of the BGGA proposal, so I thought I'd catch up on the current status and try applying it to a pain point in the JRuby source: the compiler.

The current compiler is made up of two halves: the AST walker and the bytecode emitter. The AST walker recursively walks the AST, calling appropriate methods on a set of interfaces into the bytecode emitter. The bytecode emitter, in turn, spits out appropriate bytecodes and calls back to the AST walker. Back and forth, the AST is traversed and all nested structures are assembled appropriately into a functional Java method.

This back and forth is key to the structure and relative simplicity of the compiler. Take for example the following method in ASTCompiler.java, which compiles a Ruby "next" keyword (similar to Java's "continue"):

    public static void compileNext(Node node, MethodCompiler context) {
        context.lineNumber(node.getPosition());

        final NextNode nextNode = (NextNode) node;

        ClosureCallback valueCallback = new ClosureCallback() {
            public void compile(MethodCompiler context) {
                if (nextNode.getValueNode() != null) {
                    ASTCompiler.compile(nextNode.getValueNode(), context);
                } else {
                    context.loadNil();
                }
            }
        };

        context.pollThreadEvents();
        context.issueNextEvent(valueCallback);
    }

First, the "lineNumber" operation is called on the MethodCompiler, my interface for the primary bytecode emitter. This emits bytecode for line number information based on the parsed position in the Ruby AST. Then we get a reference to the NextNode passed in.

Now here's where it gets a little tricky. The "next" operation can be compiled in one of two ways. If it occurs within a normal loop, and the compiler has an appropriate jump location, it will compile as a normal Java GOTO operation.
If, on the other hand, the "next" occurs within a closure (and not within an immediately-enclosing loop), we must initiate a non-local branch operation. In short, we must throw a NextJump.

In Ruby, unlike in Java, "next" can take an optional value. In the simple case, where "next" is within a normal loop, this value is ignored. When a "next" occurs within a closure, the given value becomes the local return from that invocation of the closure. The idea is that you might write code like this, where you want to do an explicit local return from a closure rather than let the return value "fall off the end":

    def foo
      puts "still going" while yield
    end

    a = 0
    foo { next false if a >= 4; a += 1; true }

...which simply prints "still going" four times.

The straightforward way to compile this non-local "next" would be to evaluate the argument, construct a NextJump object, swap the two so we can call the NextJump(IRubyObject value) constructor with the given value, and then raise the exception. But that requires us to juggle values around all the time. This simple case doesn't seem like such a problem, but imagine the hundreds or thousands of nodes the compiler will handle for a given method, all spending at least part of their time juggling stack values around. It would be a miserable waste.

So the compiler constructs a poor man's closure: an anonymous inner class. The inner class implements our "ClosureCallback" interface, which has a single method "compile" accepting a single MethodCompiler parameter "context". This allows the non-local "next" bytecode emitter to first construct the NextJump, then ask the AST compiler to continue processing AST nodes. The compiler walks the "value" node for the "next" operation, again causing appropriate bytecode emitter calls to be made, and finally we have our value on the stack, exactly where we want it.
We continue constructing the NextJump and happily toss it into the ether. The final line of the compileNext method initiates this process.

So what would this look like with the closure specification in play? We'll simplify it with a function object.

    public static void compileNext(Node node, MethodCompiler context) {
        context.lineNumber(node.getPosition());

        final NextNode nextNode = (NextNode) node;

        ClosureCallback valueCallback = { MethodCompiler context =>
            if (nextNode.getValueNode() != null) {
                ASTCompiler.compile(nextNode.getValueNode(), context);
            } else {
                context.loadNil();
            }
        };

        context.pollThreadEvents();
        context.issueNextEvent(valueCallback);
    }

That's starting to look a little cleaner. Gone is the explicit "new"ing of a ClosureCallback anonymous class, along with the superfluous "compile" method declaration. We're also seeing a bit of magic outside the function type: closure conversion. Our little closure that accepts a MethodCompiler parameter is being coerced into the appropriate interface type for the "valueCallback" variable.

How about a more complicated example? Here's a much longer method from JRuby that handles "operator assignment", code like a.b ||= c or a.b += c:

    public static void compileOpAsgn(Node node, MethodCompiler context) {
        context.lineNumber(node.getPosition());

        // FIXME: This is a little more complicated than it needs to be;
        // do we see now why closures would be nice in Java?

        final OpAsgnNode opAsgnNode = (OpAsgnNode) node;

        final ClosureCallback receiverCallback = new ClosureCallback() {
            public void compile(MethodCompiler context) {
                ASTCompiler.compile(opAsgnNode.getReceiverNode(), context); // [recv]
                context.duplicateCurrentValue(); // [recv, recv]
            }
        };

        BranchCallback doneBranch = new BranchCallback() {
            public void branch(MethodCompiler context) {
                // get rid of extra receiver, leave the variable result present
                context.swapValues();
                context.consumeCurrentValue();
            }
        };

        // Just evaluate the value and stuff it in an argument array
        final ArrayCallback justEvalValue = new ArrayCallback() {
            public void nextValue(MethodCompiler context, Object sourceArray, int index) {
                compile(((Node[]) sourceArray)[index], context);
            }
        };

        BranchCallback assignBranch = new BranchCallback() {
            public void branch(MethodCompiler context) {
                // eliminate extra value, eval new one and assign
                context.consumeCurrentValue();
                context.createObjectArray(new Node[]{opAsgnNode.getValueNode()}, justEvalValue);
                context.getInvocationCompiler().invokeAttrAssign(opAsgnNode.getVariableNameAsgn());
            }
        };

        ClosureCallback receiver2Callback = new ClosureCallback() {
            public void compile(MethodCompiler context) {
                context.getInvocationCompiler().invokeDynamic(
                        opAsgnNode.getVariableName(), receiverCallback, null,
                        CallType.FUNCTIONAL, null, false);
            }
        };

        if (opAsgnNode.getOperatorName() == "||") {
            // if lhs is true, don't eval rhs and assign
            receiver2Callback.compile(context);
            context.duplicateCurrentValue();
            context.performBooleanBranch(doneBranch, assignBranch);
        } else if (opAsgnNode.getOperatorName() == "&&") {
            // if lhs is true, eval rhs and assign
            receiver2Callback.compile(context);
            context.duplicateCurrentValue();
            context.performBooleanBranch(assignBranch, doneBranch);
        } else {
            // eval new value, call operator on old value, and assign
            ClosureCallback argsCallback = new ClosureCallback() {
                public void compile(MethodCompiler context) {
                    context.createObjectArray(new Node[]{opAsgnNode.getValueNode()}, justEvalValue);
                }
            };

            context.getInvocationCompiler().invokeDynamic(
                    opAsgnNode.getOperatorName(), receiver2Callback, argsCallback,
                    CallType.FUNCTIONAL, null, false);
            context.createObjectArray(1);
            context.getInvocationCompiler().invokeAttrAssign(opAsgnNode.getVariableNameAsgn());
        }

        context.pollThreadEvents();
    }

Gods, what a monster. And notice my snarky comment at the top about how nice closures would be (it's really there in the source, see for yourself). This method obviously needs to be refactored, but there's a key goal here that isn't addressed easily by currently-available Java syntax: the caller and the callee must cooperate to produce the final result. And in this case that means numerous closures.

I will spare you the walkthrough on this, and I will also spare you the one or two other methods in the ASTCompiler class that are even worse. Instead, we'll jump to the endgame:

    public static void compileOpAsgn(Node node, MethodCompiler context) {
        context.lineNumber(node.getPosition());

        final OpAsgnNode opAsgnNode = (OpAsgnNode) node;

        ClosureCallback receiverCallback = { MethodCompiler context =>
            ASTCompiler.compile(opAsgnNode.getReceiverNode(), context); // [recv]
            context.duplicateCurrentValue(); // [recv, recv]
        };

        BranchCallback doneBranch = { MethodCompiler context =>
            // get rid of extra receiver, leave the variable result present
            context.swapValues();
            context.consumeCurrentValue();
        };

        // Just evaluate the value and stuff it in an argument array
        ArrayCallback justEvalValue = { MethodCompiler context, Object sourceArray, int index =>
            compile(((Node[]) sourceArray)[index], context);
        };

        BranchCallback assignBranch = { MethodCompiler context =>
            // eliminate extra value, eval new one and assign
            context.consumeCurrentValue();
            context.createObjectArray(new Node[]{opAsgnNode.getValueNode()}, justEvalValue);
            context.getInvocationCompiler().invokeAttrAssign(opAsgnNode.getVariableNameAsgn());
        };

        ClosureCallback receiver2Callback = { MethodCompiler context =>
            context.getInvocationCompiler().invokeDynamic(
                    opAsgnNode.getVariableName(), receiverCallback, null,
                    CallType.FUNCTIONAL, null, false);
        };

        // eval new value, call operator on old value, and assign
        ClosureCallback argsCallback = { MethodCompiler context =>
            context.createObjectArray(new Node[]{opAsgnNode.getValueNode()}, justEvalValue);
        };

        if (opAsgnNode.getOperatorName() == "||") {
            // if lhs is true, don't eval rhs and assign
            receiver2Callback.compile(context);
            context.duplicateCurrentValue();
            context.performBooleanBranch(doneBranch, assignBranch);
        } else if (opAsgnNode.getOperatorName() == "&&") {
            // if lhs is true, eval rhs and assign
            receiver2Callback.compile(context);
            context.duplicateCurrentValue();
            context.performBooleanBranch(assignBranch, doneBranch);
        } else {
            context.getInvocationCompiler().invokeDynamic(
                    opAsgnNode.getOperatorName(), receiver2Callback, argsCallback,
                    CallType.FUNCTIONAL, null, false);
            context.createObjectArray(1);
            context.getInvocationCompiler().invokeAttrAssign(opAsgnNode.getVariableNameAsgn());
        }

        context.pollThreadEvents();
    }

There are two things I'd like you to notice here. First, it's a bit shorter as a result of the literal function objects and closure conversion. It's also a bit DRYer, which naturally plays into code reduction. Second, there's far less noise to contend with. Rather than having a minimum of five verbose lines to define a one-line closure (for example), we now have three terse ones. We've managed to tighten the focus to the lines of code we're actually interested in: the bodies of the closures.

Of course this quick tour doesn't get into the much wider range of features that the closures proposal contains, such as non-local returns. It also doesn't show closures being invoked, because with closure conversion many existing interfaces can be represented as function objects automatically.

I'll be looking at the closure proposal a bit more closely, and time permitting I'll try to get a simple JRuby prototype compiler wired up using the techniques above. I'd recommend you give it a try too, and offer Neal your feedback.
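The value-carrying "next" that compileNext exists to support can be verified in plain Ruby (a standalone check of the language semantics, not compiler code):

```ruby
# Yield until the block returns false; `next <value>` inside a block is a
# local return, so `next false` terminates the loop below.
def keep_yielding
  count = 0
  count += 1 while yield
  count
end

a = 0
iterations = keep_yielding do
  next false if a >= 4  # local return from this block invocation
  a += 1
  true
end
iterations  # => 4
```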
Posted over 18 years ago by Charles Oliver Nutter
I've posted about TIOBE here before.

The TIOBE Programming Community index gives an indication of the popularity of programming languages. The index is updated once a month. The ratings are based on the world-wide availability of skilled engineers, courses and third party vendors. The popular search engines Google, MSN, Yahoo!, and YouTube are used to calculate the ratings. Observe that the TIOBE index is not about the best programming language or the language in which most lines of code have been written.

I noticed this month that Ruby has moved up to #9 in the list, passing JavaScript. Also noted in the November Newsflash is that Ruby is currently the front runner to win "programming language of the year" for the second year in a row, closely followed by D and C#.

TIOBE Programming Community Index
Posted over 18 years ago by Charles Oliver Nutter
The latest craze at conferences, especially those associated with O'Reilly or Ruby, is the game Werewolf (historically known as Mafia, but Werewolf has become more popular). The premise of the game is simple (from Wikipedia's Mafia article):

Mafia (also known under the variant Werewolf or Vampire) is a party game modeling a battle between an informed minority and an uninformed majority. Mafia is usually played in groups with at least five players. During a basic game, players are divided into two teams: 'Mafia members', who know each other; and 'honest people', who generally know the number of Mafia amongst them. The goal of both teams is to eliminate each other; in more complicated games with multiple factions, this generally becomes "last side standing".

Substitute "villagers" for "honest people" and "werewolves" for "mafia members" and you get the general idea. They're the same game.

Again from Wikipedia:

Mafia was created by Dimma Davidoff at the Psychological Department of Moscow State University, in spring of 1986, and the first players were playing in classrooms, dorms, and summer camps of Moscow University. [citation needed] The game became popular in other Soviet colleges and schools and in the 1990s it began to be played in Europe (Hungary, Poland, England, Norway) and then the United States. Mafia is considered to be one of "the 50 most historically and culturally significant games published since 1800" by about.com.

So you get the idea. It's a fun game. I first played it at Foo Camp 2007, and I've played at each Ruby-related conference since then. It's a great way to get to know people, and an interesting study in social dynamics. I have my own opinions about strategy, and in particular about a fatal flaw in the game, but I'll leave those for another day. Instead, I have a separate concern:

Is Werewolf killing the conference hackfest?

Last year at RubyConf, Nick Sieger and I sat up until 4 AM with Eric Hodel, Evan Phoenix, Zed Shaw and others hacking on stuff. Eric showed a little Squeak demo for those of us who hadn't used it, Zed and I talked about getting a JRuby-compatible Mongrel release out (which finally happened a year later), and I think we all enjoyed the time to hack with a few others who code for the love of coding.

As another example, RubyGems, the definitive and now official packaging system for Ruby apps and libraries, was written during a late-night hackfest at an early RubyConf (2002?). I'm not remembering my history so well, but I believe Rake had a similar start, as did several other projects, including the final hours of the big Rails 1.0 release.

This year, and for the past several conferences, there's been a significant drop in such hackfests. And it's because of Werewolf.

Immediately after Matz's keynote last night, many of the major Ruby players sequestered themselves in isolated groups to play Werewolf for hours (and yes, I know many did not). They did not write the next RubyGems. They did not plant the seeds of the next great Ruby web framework. They did not advance the Ruby community. They played a game.

Don't get me wrong... I have been tempted to join every Werewolf game I can. I enjoy the game, and I feel like I'm at least competent at it. And I can appreciate wanting to blow off steam and play a game after a long conference day. I often feel the same way.

But I'm worried about the general trend. Not only does Werewolf seem to be getting more popular, it seems to draw in many of the best and brightest conference attendees, attendees who might otherwise be just as happily hacking on the "hard problems" we still face in the software world.

So what do you think? Is this a passing trend, or is it growing and spreading? Does it mean the doom of the late-night, post-conference hackfest and the inspirational products it frequently produces? Or is it just a harmless pastime for a few good folks who need an occasional break?
Posted over 18 years ago by Charles Oliver Nutter
I'm at RubyConf this weekend, the A-list gathering of Ruby dignitaries. Perhaps one day I'll be considered among them.At any rate, here's the top five questions I get asked. Perhaps this will help people move directly on to more interesting subjects ... [More] :)#1: Are you Charles NutterYes, I am. I look like that picture up in the corner of my blog.#2: How's life been at Sun/How's Sun been treating you?Coming to Sun has been the best professional experience of my life. When they took us in over a year ago, many worried that they'd somehow ruin the project or steer it in the wrong direction (and I had my occasional concerns as well). The truth is that they've remained entirely hands-off with regards to JRuby while simultaneously putting a ton of effort into NetBeans Ruby support and committing mission-critical internal projects to running on JRuby. Beyond that, I've become more and more involved in the growing effort to support many languages on the JVM, through changes to Java and the JVM specification, public outreach, and cultivating and assisting language-related projects. It's been amazing.Along with this question I often get another: "What's it like to work at Sun?" Unfortunately, this one is a little hard for me to answer. Tom Enebo and I work from our homes in Minneapolis, so "working at Sun" basically means hacking on stuff in my basement for 16 hours a day. In that regard, it's like...great.#3 What's next for JRuby?Usually this comes from people who haven't really been following JRuby much, and that's certainly understandable. We had our 1.0 release just five months ago, and we've had two maintenance releases since then plus a major release coming out in beta this weekend. So there's a lot to keep track of.JRuby 1.0 was released in RC form at JavaOne 2007 and in final form at Ruby Kaigi 2007 in Tokyo. It was focused largely on compatibility, but had early bits of the Ruby-to-bytecode compiler (perhaps 25% complete). 
Over the summer, we simultaneously worked on improving compatibility (adding missing features, fixing bugs) and preparing for another major release (with me spending several months working on the compiler and various performance optimizations). That leads us to JRuby 1.1, coming out in beta form this weekend. JRuby 1.1 represents a huge milestone for JRuby and for Ruby in general. It's now safe to say it's the fastest way to run Ruby code in a 1.8-compatible environment. It includes the world's first Ruby compiler for a general-purpose VM, and the first complete and working 1.8-compatible compiler of any kind. And it has finally reached our largest Rails milestone to date: it runs Rails faster than Ruby 1.8.x does.What comes after JRuby 1.1? Well, we need to finally clean up our association/integration with the other half of JRuby: Java support. Java integration works great right now--probably good enough for 90% of users. But JRuby still doesn't quite fit into the Java ecosystem as well as we'd like. Along with continuing performance and compatibility work, JRuby 2.0 should finally run the last mile of Java integration. Then, maybe we'll be "done"?#4 What do you think of [Ruby implementation X]?I usually hear this as "what bad things do you want to say about the other implementations", even though people probably don't intend it that way. So, for the record, here's my current position on the big five alternative Ruby implementations.Rubinius: Evan and I talk quite a bit about implementation design, challenges of implementing Ruby, and so on. JRuby uses Rubinius's specs for testing, and Rubinius borrows/ports many of our tests for their specs. I've even made code contributions to Rubinius and Sun sponsored the Ruby Hackfest in Denver this past September. We're quite cozy.Technical opinion? It's a great idea to implement Ruby in Ruby...I'd rather use Ruby than Java in JRuby whenever possible. However the challenge of making it perform well is a huge one. 
When all your core classes are implemented in Ruby, you need your normal Ruby execution to be *way* faster than Ruby 1.8 to have comparable application performance... like orders of magnitude faster. Think about it this way: if each core class method makes ten calls to more core class methods, and each of those methods calls ten native methods (finally getting to the VM kernel), you've got 100x more method invocations than an implementation with a "thicker" kernel that's immediately a native call. Now, the situation probably isn't that bad in Rubinius, and I'm sure Evan and company will manage to overcome the performance challenges, but it's a difficult problem to solve when writing a VM from scratch. We all like difficult problems, don't we?

XRuby: XRuby is the "other" implementation of Ruby for the JVM. Xue Yong Zhi took the approach of writing a compile-only implementation, but that's probably the only major difference from the JRuby approach. XRuby has a core class structure similar to JRuby's, and when I checked a few months ago, it could run some benchmarks faster than JRuby. Although it's still very incomplete, it's a good implementation that's probably not getting enough attention. Xue and I talk occasionally on IM about implementation challenges; I believe we've both been able to help each other.

Ruby.NET: Ruby.NET is the first implementation of Ruby for the CLR, and by far the most complete. Wayne Kelly and John Gough from the University of Queensland ran this Microsoft-funded project when it was called the "Gardens Point Ruby.NET Compiler", and Wayne has continued leading it in its "truly open source" project form on Google Code. I exchanged emails many times with Wayne about strategies for opening up the project, and I involved myself in many of the early discussions on their mailing list. I'm also, oddly enough, listed as a project owner on the project page.
I'd like to see them succeed... more Ruby options means Ruby gains more validation on all platforms, and that means JRuby will be more successful. Ironic, perhaps, that long-term Ruby and JRuby success depends on all implementations doing well... but I believe that to be true.

IronRuby: IronRuby is the current in-progress implementation of Ruby for the CLR, primarily being developed by Microsoft's John Lam and his team. IronRuby is still very early in its implementation, but the team has already achieved one big milestone: the Ruby core classes are available as a public, open-source project on RubyForge, and you, dear reader, can actually contribute. In my book, it qualifies as the first "real" open source project out of Microsoft, since it finally recognizes that OSS is a two-way street.

Technically, IronRuby is also an interesting project because it's stretching Microsoft's DLR in some pretty weird directions. The DLR was originally based on IronPython, and it sounds like there have been growing pains supporting Ruby. Ruby is a much more complicated language to implement than Python, and from what I've heard the IronRuby team has had to diverge from the DLR in a few areas (perhaps temporarily). But it also sounds like IronRuby is helping to expand the DLR's reach, which is certainly good for MS and for folks interested in the DLR. John and I talk at conferences, and I monitor and participate in the IronRuby mailing list and IRC channel. We Rubyists are a close-knit bunch.

Ruby 1.9.1/YARV: Koichi Sasada has done the impossible. He's implemented a bytecode VM and compiler for Ruby without altering the memory model, garbage collector, or core class implementations. And he's managed to deliver targeted performance improvements for Ruby that range from two times to ten times faster. It's an amazing achievement. Ruby 1.9.1 will finally come out this Christmas with Koichi's YARV as the underlying execution engine.
I talk with Koichi on IRC frequently, and I've been studying his optimizations in YARV for ideas on how to improve JRuby performance in the future. I've also offered suggestions on Koichi's Fiber implementation, resulting in him specifying a core Fiber API that can safely be implemented in JRuby as well. Officially, Ruby 1.9.1 is the standard upgrade path for Rubyists interested in moving on from Ruby 1.8 (and not concerned about losing a few features). JRuby will support Ruby 1.9 execution in the near future, and we already have a partial YARV VM implementation. JRuby and YARV will continue to evolve together, and I expect we'll be cooperating closely in the future.

#5: How's JRuby's performance?

This is the first question that finally gets into interesting details about JRuby. JRuby's performance is great, and improving every day. Straight-line execution is now typically 2-4x better than Ruby 1.8.6. Rails whole-app performance ranges from just slightly slower to slightly better, though there have been reports of specific apps running many times faster. There's more work to do across the board, but I believe we've finally gotten to a point where we're comfortable saying that JRuby is the fastest complete Ruby 1.8 implementation available.

See also my previous posts, and posts by Ola Bini and Nick Sieger, for more detail on performance.
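Readers who want to check the straight-line numbers for themselves can run a micro-benchmark like the following under both MRI and JRuby. This is my own illustrative sketch using a naive Fibonacci workload, not the benchmark suite the JRuby team actually used:

```ruby
require "benchmark"

# Naive recursive Fibonacci -- a classic "straight-line execution"
# workload dominated by method dispatch and arithmetic.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

# Benchmark.realtime returns wall-clock seconds for the block.
elapsed = Benchmark.realtime { fib(25) }
puts "fib(25) took #{format('%.3f', elapsed)}s (ruby #{RUBY_VERSION})"
```

Run the same script with `ruby bench.rb` and `jruby bench.rb` and compare. Keep in mind that JVM warm-up matters for JRuby, so timing several repetitions in one process gives fairer numbers than a single cold run.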
Posted over 18 years ago by Ola Bini
The JRuby community is pleased to announce the release of JRuby 1.0.2.

Homepage: http://www.jruby.org/
Download: http://dist.codehaus.org/jruby/

JRuby 1.0.2 is a minor release of our stable 1.0 branch. The fixes in this release primarily address obvious compatibility issues that we felt were low risk. We periodically push out point releases to continue supporting production users of JRuby 1.0.x.

Highlights:
- Fixed several nasty issues for users on Windows
- Fixed a number of network compatibility issues
- Includes support for Rails 1.2.5
- Reduced memory footprint
- Improved File IO performance
- trap() fix
- 99 total issues resolved since JRuby 1.0.1

Special thanks to the new JRuby contributors who rose to Charlie's challenge to write patches for some outstanding bugs: Riley Lynch, Mathias Biilmann Christensen, Peter Brant, and Niels Bech Nielsen. Welcome aboard...
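As a quick refresher on the trap() mentioned in the highlights: it installs a Ruby handler for an OS signal, and signal handling is one of the areas where implementations must match MRI's behavior closely. A generic usage sketch (my own example on a POSIX system, not code from the JRuby changelog):

```ruby
# Install a handler for SIGUSR1; Signal.trap returns the previous handler.
received = []
previous = Signal.trap("USR1") { received << :usr1 }

# Deliver the signal to our own process, then give the interpreter a
# moment to dispatch the handler.
Process.kill("USR1", Process.pid)
sleep 0.1

puts received.inspect  # => [:usr1]

# Restore whatever handler was installed before.
Signal.trap("USR1", previous)
```

Kernel#trap is an alias for the same call, which is the spelling the release notes use.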