
News

Posted over 13 years ago
As I had to make an HTTP request on Android, I started with the following code using the Apache HTTP client API (included in Android):

    // create and initialize the Apache HTTP client
    final HttpClient httpClient = ......;
    // create and initialize the HTTP request
    HttpPost httppost = ......;

    ResponseHandler<String> rh = new ResponseHandler<String>() {
        public String handleResponse(HttpResponse response)
                throws ClientProtocolException, IOException {
            HttpEntity entity = response.getEntity();
            if (entity != null) {
                StringBuilder out = new StringBuilder();
                byte[] b = EntityUtils.toByteArray(entity);
                out.append(new String(b, 0, b.length));
                return out.toString();
            } else {
                return "";
            }
        }
    };
    String responseString = httpClient.execute(httppost, rh);

After running into a problem, I discovered that the server returned an HTTP error 403 (access forbidden). So I added a status-code check (highlighted in the original post) to handle the case where the server doesn't return OK:

    // create and initialize the Apache HTTP client
    final HttpClient httpClient = ......;
    // create and initialize the HTTP request
    HttpPost httppost = ......;

    ResponseHandler<String> rh = new ResponseHandler<String>() {
        public String handleResponse(HttpResponse response)
                throws ClientProtocolException, IOException {
            if (response.getStatusLine().getStatusCode() == HttpStatus.SC_OK) {
                HttpEntity entity = response.getEntity();
                if (entity != null) {
                    StringBuilder out = new StringBuilder();
                    byte[] b = EntityUtils.toByteArray(entity);
                    out.append(new String(b, 0, b.length));
                    return out.toString();
                } else {
                    return "";
                }
            } else {
                StatusLine sl = response.getStatusLine();
                String message = "HTTP error " + sl.getStatusCode()
                        + " Reason: " + sl.getReasonPhrase();
                throw new HttpErrorException(message);
            }
        }
    };
    String responseString = httpClient.execute(httppost, rh);

Remark: I created the HttpErrorException class, which extends IOException, to stay compliant with the handleResponse method specification of the ResponseHandler interface. After this modification, the caller must catch HttpErrorException to handle specifically the case where the server doesn't return OK.
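The post doesn't show the HttpErrorException class itself. As a rough sketch under the stated constraint (it must extend IOException so handleResponse can throw it), it could look like this, together with a hypothetical caller; the class body and the catch-block comments are my assumptions, not code from the original post.

    import java.io.IOException;

    // Sketch only: a checked exception extending IOException, so it can be thrown
    // from ResponseHandler.handleResponse() without changing the method signature.
    public class HttpErrorException extends IOException {
        public HttpErrorException(String message) {
            super(message);
        }
    }

    // Hypothetical caller: catch HttpErrorException before the broader IOException.
    try {
        String responseString = httpClient.execute(httppost, rh);
    } catch (HttpErrorException e) {
        // the server answered with a non-OK status code (e.g. 403)
    } catch (IOException e) {
        // network-level or other I/O failure
    }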
Posted over 13 years ago
A while ago I wrote an IL disassembler to test IKVM.Reflection, both as a correctness test and to test whether the API surface exposes enough of the underlying information. I thought it would make a good IKVM.Reflection example (although the code won't win any awards; it's a bit of a hack). One nice feature is that it tries really hard to emit the same output file as the .NET ildasm, to make comparison easier. There's even a command line option to match a specific ildasm version (2.0, 4.0 or 4.5) and its quirks. The binaries are available in ikdasm-v0.1-binaries.zip and the Visual Studio 2010 solution in ikdasm-v0.1.zip. Note that in its current form the ildasm compatibility mode only works on Windows, because it needs to P/Invoke _gcvt in msvcrt.dll to make sure the floating point numbers match the ildasm output.
Posted over 13 years ago
LaternaMagica 0.4 has been released. It is the image viewer and slideshow tool for GNUstep (and your Macintosh). Enjoy better navigation, keyboard shortcuts and an improved mass-exporting tool. Enjoy it on more platforms too.
Posted over 13 years ago
At Lang.NEXT I met someone who was interested in using IKVM.Reflection, and after he started porting his code he ran into some missing functionality in IKVM.Reflection, so I've made some improvements there. I've also fixed the remaining known issues with access stubs and "unloadable" (missing types) custom modifiers.

Changes:
- Fix for recently introduced bug (with access stub rewrite). Bug #3512589.
- Changed build process to fall back to the NAnt task if we can't find the resource compiler.
- Made WinForms message loop thread creation lazy, to hopefully allow more applications to run without a message loop thread. This is a (partial) workaround for bug #3515033.
- Changed ikvmc to read input files after processing all the options (to make -nowarn: and -warnaserror: options that follow the file names work for warnings produced during input file reading).
- Added support for type 2 access stubs for constructors.
- Bug fix. When an unloadable type is used in a method signature that overrides a method (or implements an interface method), the custom modifier must be the same as the base class, or an override stub must be generated.
- Added partial implementation of ThreadMXBean.
- IKVM.Reflection: Bug fix. When writing an assembly that has a .netmodule, the TypeDefId field in the ExportedType in the manifest module should contain a TypeDef token instead of an index.
- IKVM.Reflection: Bug fix. When exporting a nested type (via AssemblyBuilder.__AddTypeForwarder()), we should also set the namespace (in practice it is unlikely for a nested type to have a namespace, but it is possible).
- IKVM.Reflection: Corrected a couple of method parameter names in Assembly.
- IKVM.Reflection: Added Assembly.GetType(string,bool,bool) method.
- IKVM.Reflection: Added support for case-insensitive type and member lookup.
- IKVM.Reflection: Implemented case-insensitive lookup in Type.GetInterface().
- IKVM.Reflection: Moved GetEvents(), GetFields(), GetConstructors(), GetNestedTypes() and GetProperties() to a common implementation that fixes a number of bugs.
- IKVM.Reflection: Fixed GetMethods() to properly filter out base class methods that have been overridden.
- IKVM.Reflection: Moved member lookup by name to a common implementation that fixes a number of bugs and adds IgnoreCase support.
- IKVM.Reflection: Added MemberInfo.ReflectedType.
- IKVM.Reflection: Added Binder support for method and property lookup.
- IKVM.Reflection: Bug fix. ParameterBuilder.Position should return the 1-based position passed in to DefineParameter, not the 0-based ParameterInfo.Position.
- IKVM.Reflection: Changed Type.__ContainsMissingType to return true for generic type parameters that have constraints that return true for __ContainsMissingType.
- IKVM.Reflection: Bug fix. It should be possible to import a function pointer type into a ModuleBuilder.
- IKVM.Reflection: Implemented Assembly.ToString().
- IKVM.Reflection: Added [Flags] attribute to ResourceLocation enum.
- IKVM.Reflection: Added support for reading/querying manifest resources that are forwarded to another assembly.
- IKVM.Reflection: Bug fix. Module.GetManifestResourceStream() should return null (instead of throwing FileNotFoundException) for non-existing resource names.

Binaries available here: ikvmbin-7.1.4491.zip
Posted over 13 years ago
With Java / Swing, each showXxxDialog method of the JOptionPane class creates a modal dialog box that waits for a user action. It's modal because the user can't do anything outside of it. Let's take the following example:

    int response = JOptionPane.showConfirmDialog(null,
            "Do you want to delete this file?", "Confirmation",
            JOptionPane.YES_NO_OPTION);
    if (response == JOptionPane.YES_OPTION) {
        // ... action if response = Yes
    }

We can see that the call is synchronous: the method returns a value that allows us to process the response immediately.

Under Android, dialog boxes are also modal, but the response must be processed asynchronously. So a different programming strategy is needed, based on callbacks. For a Yes/No dialog box, a callback for the Yes answer is needed, and possibly another one for the No answer. The Android code looks like this:

    Activity activity = .....; // find the current Activity
    AlertDialog.Builder builder = new AlertDialog.Builder(activity);
    builder.setMessage("Do you want to delete this file?");
    // definition of the callback for the Yes answer
    builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() {
        @Override
        public void onClick(DialogInterface dialog, int which) {
            // ... action if answer = Yes
        }
    });
    builder.show(); // doesn't return the user's choice
    // after this line, the user may not have answered
    // the question yet (because the call is asynchronous)

To make the code a bit clearer, I created an abstract class called Command:

    abstract public class Command implements DialogInterface.OnClickListener {
        @Override
        public final void onClick(DialogInterface dialog, int which) {
            execute();
            dialog.dismiss();
        }

        abstract public void execute();
    }

I also wrote the following method:

    public void displayYesNoDialog(Activity context, int messageId, Command yesCommand) {
        AlertDialog.Builder builder = new AlertDialog.Builder(context);
        builder.setMessage(context.getString(messageId));
        builder.setPositiveButton("Yes", yesCommand);
        builder.show();
    }

I use it like this:

    Activity activity = .....; // find the current Activity
    int messageId = .....;     // find the message identifier
    Command yesCommand = new Command() {
        public void execute() {
            // ... action if answer = Yes
        }
    };
    displayYesNoDialog(activity, messageId, yesCommand);
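If the No answer also needs an action, a symmetrical variant of the helper above could wire a second Command to the negative button. This overload is my own sketch, not part of the original post:

    public void displayYesNoDialog(Activity context, int messageId,
                                   Command yesCommand, Command noCommand) {
        AlertDialog.Builder builder = new AlertDialog.Builder(context);
        builder.setMessage(context.getString(messageId));
        builder.setPositiveButton("Yes", yesCommand);  // Command implements OnClickListener
        builder.setNegativeButton("No", noCommand);    // the optional No callback mentioned above
        builder.show();
    }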
Posted over 13 years ago
Funny to see Tap the Waterdroplet (the GNU Classpath mascot) used in court to explain what Java is: Tap makes a couple more cameo appearances in the documents. It is a fun read.
Posted almost 14 years ago
Here are some somewhat random musings on building the JDK and various build system observations. It might be observed that some of these may sound like whining; I can assure you that whining is not allowed in my blog, only constructive criticism, it's everyone else that is whining. :^) Apologies for the length.

Build and test of the JDK has multiple dimensions, and I cannot say that I have them all covered here; it is, after all, just a blog, and I am not endowed with any supernatural guidance here.

Continuous Build & Smoke Test
Every component, every integration area, every junction point or merge point should be constantly built and smoke tested. I use the term smoke test because completely testing a product can take many weeks, and many tests do not lend themselves to being fully and reliably automated. The job of complete testing belongs to the testing organization. Smoke tests should provide a reasonable assurance that a product is not brain dead and has no major flaws that would prevent further testing later on down the road. These smoke tests should be solid, reliable tests, and any failure of these tests should signify a flaw in the changes to the product and raise major red flags for the individuals or teams that integrated any recent changes. Over the last year or more we have been trying to identify these tests for the JDK, and it's not an easy task. Everyone cuts corners for the sake of productivity; it's just important to cut those corners with eyes wide open. The ideal would be that every changeset was known to have passed the build and smoke test; the reality is far less than that, but we know where we want to be.

Build and Test Machines
The hardware/machine resources for a build and test system are cheap, and a bargain if they keep all developers shielded from bad changes, finding issues as early in the development process as possible. But it is also true that hardware/machine resources do not manage themselves, so there is also an expense to managing the systems; some of it can be automated, but not everything. Virtual machines can provide benefits here, but they also introduce complications.

Continuous Integration
Depending on who you talk to, this can mean a variety of things. If it includes building and smoke testing before the changes are integrated, this is a fantastic thing. If people consider this to mean that developers should continuously integrate changes without any verification whatsoever that the changes work and don't cause regressions, that could be a disaster. Some so-called 'Wild West' projects purposely want frequent integrations with little or no build and test verification. Granted, for a small, tight team, going 'Wild West' can work very well, but not when the lives of innocent civilians are at risk. Wild West projects must be contained; all members must be agile, wear armor, and be willing to accept the consequences of arbitrary changes sending a projectile through your foot.

Multiple Platforms
The JDK is not a pure Java project, and builds must be done on a set of different platforms. When I first started working on the JDK, it became obvious to me that this creates a major cost to the project, or any project. Expecting all developers to have access to all types of machines, be experienced with all platforms, and take the time to build and test manually on all of them is silly; you need some kind of build and test system to help them with this.
Building on multiple platforms (OS releases or architectures) is hard to set up, regardless of the CI system used; this is a significant issue that is often underestimated. Typically the CI system wants to try to treat all systems the same, and the fact of the matter is, they are not, and somewhere these differences have to be handled very carefully. Pure Linux projects, or pure Windows projects, will quickly become tethered to that OS and the various tools on it. Sometimes that tethering is good, sometimes not.

Multiple Languages
Again, the JDK is not a pure Java project; many build tools try to focus on one language or set of languages. Building a product that requires multiple languages, where the components are tightly integrated, is difficult. Pure Java projects and pure C/C++ projects have a long list of tools and build assists for creating the resulting binaries. Less so for things like the JDK, where not only do we have C/C++ code in the JVM, but also C/C++ code in the various JNI libraries, and C/C++ code in JVM agents (very customized). The GNU make tool is great for native code, the Ant tool is great for small Java projects, but there aren't many tools that work well in all cases. Picking the right tools for the JDK build is not a simple selection.

Multiple Compilers
Using different C/C++ compilers requires a developer to be well aware of the limitations of all the compilers, and to some degree, if the Java code is also being compiled by different Java compilers, the same awareness is needed. This is one of the reasons that builds and tests on all platforms are so important, and also why changing compilers, even just to a new version of the same compiler, can make people paranoid.

Partial Builds
With the JDK we have a history of doing what we call partial builds. The HotSpot team rarely builds the entire JDK; instead they just build HotSpot (because that is the only thing they changed) and then place their HotSpot in a vetted JDK image that was built by the Release Engineering team at the last build promotion. Ditto for the JDK teams that don't work on HotSpot; they rarely build HotSpot. This was and still is considered a developer optimization, but it is really only possible because of the way the JVM interfaces with the rest of the JDK: it rarely changes. To some degree, successful partial builds can indicate that the changes have not created an interface issue and can be considered somewhat 'compatible'. These partial builds create issues when there are changes in both HotSpot and the rest of the JDK, where both changes need to be integrated at the same time, or more likely, in a particular order, e.g. HotSpot integrates a new extern interface, and later the JDK team integrates a change that uses or requires that interface, ideally after the HotSpot changes have been integrated into a promoted build so that everyone's partial builds have a chance of working. The partial builds came about mostly because of build time, but also because of the time and space needed to hold all the sources of parts of the product you never really needed. I also think there is a comfort effect for a developer in not having to even see the sources to everything he or she doesn't care about. I'm not convinced that the space and time of getting the sources is that significant anymore, although I'm sure I would get arguments on that. The build speed could also become less of an issue as the new build infrastructure speeds up building and makes incremental builds work properly.
But stay tuned on this subject; partial builds are not going away, but it's clear that life would be less complicated without them.

Build Flavors
Similar to many native code projects, we can build a product, debug, or fastdebug version. My term for these is build flavors. My goal in the past has been to make sure that the build process stays the same, and it's just the flavor that changes. Just like ice cream. ;^) (fastdebug == -O -g + asserts)

Plug and Play
Related to build flavors: it has been my feeling that regardless of the build flavor, the APIs should not change. This allows someone to take an existing product build, replace a few libraries with their debug versions, and run tests that will run with the best performance possible, while being able to debug in the area of interest. This cannot happen if the debug or fastdebug versions have different APIs, like MSVCRTD.DLL and MSVCRT.DLL.

Mercurial
This probably applies to Git or any distributed Source Code Management system too. Face it, DSCMs are different. They provide some extremely powerful abilities over single-repository-model SCMs, but they also create unique issues. The CI systems typically want to treat these SCM systems just like SVN or CVS; in my opinion that is a mistake. I don't have any golden answers here, but anyone who has worked or does work with a distributed SCM will struggle with CI systems that treat Mercurial like Subversion. The CI systems are not the only ones. Many tools seem to have this concept of a single repository that holds all the changes, when in reality, with a DSCM, the changes can be anywhere, and may or may not become part of any master repository.

Nested Repositories
Not many projects have cut up the sources like the OpenJDK. There were multiple reasons for it, but it often creates issues for tools that either don't understand the concept of nested repositories or just cannot handle them. It is not clear at this time how this will change in the future, but I doubt they will go away. It has been observed by many that the lack of bookkeeping with regards to the state of all the repositories can be an issue. The build promotion tags may not be enough to track how all the repository changeset states line up with what was built together.

Managing Build and Test Dependencies
Some build and test dependencies are just packages or products installed on a system; I've often called those "system dependencies". But many are just tarballs or zip bundles that need to be placed somewhere and referred to. In my opinion, this is a mess; we need better organization here. Yeah yeah, I know someone will suggest Maven or Ivy, but it may not be that easy. We will be trying to address this better in the future, no detailed plans yet, but we must fix this and fix it soon.

Resolved Bugs and Changesets
Having a quick connection between a resolved bug and the actual changes that fixed it is so extremely helpful that you cannot be without it. The connection needs to be both ways too. It may be possible to do this completely in the DSCM (Mercurial hooks), but in any case it is really critical to have that easy path between changes and bug reports. And if the build and test system has any kind of archival capability, also to that job data.

Automated Testing
Some tests cannot be automated, some tests should not be automated, some automated tests should never be run as smoke tests, some smoke tests should never have been used as smoke tests, some tests can seriously mess up automation and even the system being used, ...
No matter what, automating testing is not easy. You cannot treat testing like building; it has unique differences that cannot be ignored. If you want the test runs to be of the most benefit to a developer, you cannot stop on the first failure, you need to find all the failures. That failure list may be the evidence that links the failures to the change causing them, e.g. only tests using -server fail, or only tests on X64 systems fail, etc. At the same time, it is critical to drive home the fact that the smoke tests "should never fail"; it is a slippery slope to start allowing smoke tests to fail. Sometimes you need hard and fast rules on test failures, and the smoke tests are those. If accepting a failing smoke test becomes policy, that same policy needs to exclude the failing smoke test so that life can go on for everyone else.

In an automated build and test system, you have to protect yourself from the tests polluting the environment or the system and impacting the testing that follows. Redirecting the user.home and java.io.tmpdir properties can help, or at least making sure these areas are consistent from test run to test run. Creating a separate and unique DISPLAY for X11 systems can also protect your test system from being impacted by automated GUI tests that can change settings of the DISPLAY. (A small launcher sketch illustrating this kind of isolation follows at the end of this post.)

Distributed Builds
Unless you can guarantee that all systems used are producing the exact same binary bits, in my opinion, distributed builds are unpredictable and therefore unreliable. A developer might be willing to accept this potential risk, but a build and test system cannot, unless it has extremely tight control over the systems in use. It has been my experience that parallel compilation (GNU make -j N) on systems with many CPUs is a much preferred and more reliable way to speed up builds. However, if there are logically separate builds or sub-builds that can be distributed to different systems, that makes a great deal of sense. Having the debug, fastdebug, and product builds done on separate machines is a big win. Cutting up the product build can create difficult logistics in terms of pulling it all together into the final build image.

Distributed Tests
Having one large testbase, and one large testlist, and requiring one very long test run that can take multiple hours is not ideal. Generally, you want to make the testbase easy to install anywhere, and create batches of tests, so that using multiple machines of the same OS/arch allows the tests to be run in a distributed way, getting the same testing done in a fraction of the time of running one large batch. If the batches are too small, you spend more time on test setup than on running the tests. The goal should be to run the smoke tests as fast as possible in the most efficient way possible, and more systems to test with should translate into the tests getting done sooner. This also allows for new smoke test additions as the tests run faster and faster. Unlike distributed building, the testing is not creating the product bits, and even if the various machines used are slightly different, that comes closer to matching the real world anyway. Ideally you would want to test all possible real-world configurations, but we all know how impractical that is.

Killing Builds and Tests
At some point, you need to be able to kill off a build or test, probably many builds and many tests on many different systems. This can be easy on some systems, and hard on others.
Using Virtual Machines or ghosting of disk images offers the chance of simple system shutdowns and restarts with a pristine state, but that's not simple logic to get right for all systems.

Automated System Updates
Having systems do automatic updates while builds and tests are running is insane. The key to a good build and test system is reliability; you cannot have that if the system you are using is in a constant state of flux. System updates must be contained and done on a schedule that prevents any disturbance to the builds and tests going on on those systems. It is completely unacceptable to change the system during a build or test.

AV Software
AV software can be extremely disruptive to the performance of a build and test system. It is important, but must be deployed in a way that preserves the stability and reliability of the build and test system. Dynamic AV scanning is a great invention, but it has the potential to disturb build and test processes in very negative ways.

Hopefully this blabbering provided some insights into the world of JDK build and test. -kto
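As a side note to the point above about redirecting user.home and java.io.tmpdir and giving GUI tests a private DISPLAY, here is a minimal, hypothetical launcher sketch; the class name, jar name and DISPLAY number are illustrative assumptions, not anything from the original post.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Hypothetical launcher: runs a test JVM with a throwaway user.home and
    // java.io.tmpdir, plus a private DISPLAY, so GUI or file-writing tests cannot
    // pollute the build machine or the test runs that follow.
    public class IsolatedTestLauncher {
        public static void main(String[] args) throws IOException, InterruptedException {
            Path scratch = Files.createTempDirectory("smoke-test-"); // fresh area per run
            Files.createDirectories(scratch.resolve("home"));
            Files.createDirectories(scratch.resolve("tmp"));

            ProcessBuilder pb = new ProcessBuilder(
                    "java",
                    "-Duser.home=" + scratch.resolve("home"),
                    "-Djava.io.tmpdir=" + scratch.resolve("tmp"),
                    "-jar", "smoke-tests.jar");          // assumed test bundle name
            pb.environment().put("DISPLAY", ":99");      // e.g. a dedicated Xvfb display
            pb.inheritIO();

            int exit = pb.start().waitFor();
            System.out.println("test run finished with exit code " + exit);
        }
    }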
Posted almost 14 years ago
When I first wanted to create a date-type field, I expected to find a ready-to-use component under Android. That's not exactly the case! There is a component called DatePicker for selecting a date, and a dialog box called DatePickerDialog that contains it. But there is no associated TextView (or EditText) component to display the chosen date and open the dialog box when the user touches it.

Using the hello-datepicker tutorial as a basis, I have written the following small utility class in order to have a reusable component:

    public class DatePickerHelper {

        private static final int DATE_DIALOG_ID = 0;

        private final Activity activity;
        private TextView textView;

        public DatePickerHelper(Activity activity) {
            this.activity = activity;
        }

        public void init(final TextView textView) {
            this.textView = textView;
            textView.setOnTouchListener(new OnTouchListener() {
                @Override
                public boolean onTouch(View v, MotionEvent event) {
                    if (textView.isEnabled()) {
                        activity.showDialog(DATE_DIALOG_ID);
                        return true;  // processed=true
                    } else {
                        return false; // processed=false
                    }
                }
            });
        }

        public void setTextFieldValue(Date date) {
            textView.setText(DateUtils.formatDate(date));
        }

        public Dialog createDialog(int id) {
            DatePickerDialog.OnDateSetListener listener = new DatePickerDialog.OnDateSetListener() {
                @Override
                public void onDateSet(DatePicker view, int year, int monthOfYear, int dayOfMonth) {
                    textView.setText(DateUtils.formatDate(year, monthOfYear, dayOfMonth));
                }
            };
            if (DATE_DIALOG_ID == id) {
                DateRecord date = parseDate();
                return new DatePickerDialog(activity, listener,
                        date.year, date.monthOfYear, date.dayOfMonth);
            }
            return null;
        }

        public Date parseTextFieldValue() {
            DateRecord date = parseDate();
            Calendar c = Calendar.getInstance();
            c.set(Calendar.YEAR, date.year);
            c.set(Calendar.MONTH, date.monthOfYear);
            c.set(Calendar.DAY_OF_MONTH, date.dayOfMonth);
            return c.getTime();
        }

        private DateRecord parseDate() {
            return DateUtils.parseDate(textView.getText().toString());
        }
    }

Remarks: I have not included the code of the DateRecord class; it's a simple bean that contains year, monthOfYear and dayOfMonth fields. The DateUtils class, also missing from this article, converts between a String and a DateRecord.

The DatePickerHelper class can be used this way in an Android activity with a date field:

    public class MyActivity extends Activity {

        private TextView birthDateTextView;
        private final DatePickerHelper dateHelper = new DatePickerHelper(this);

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            ..........
            birthDateTextView = (TextView) findViewById(R.id.birthDateTextView);
            dateHelper.init(birthDateTextView);
            ..........
        }

        @Override
        protected Dialog onCreateDialog(int id) {
            Dialog dialog = dateHelper.createDialog(id);
            if (dialog == null) {
                dialog = super.onCreateDialog(id);
            }
            return dialog;
        }

        public void refreshModelFromView(Model model) {
            ..........
            model.setBirthDate(dateHelper.parseTextFieldValue());
            ..........
        }

        public void refreshViewFromModel(Model model) {
            ..........
            dateHelper.setTextFieldValue(model.getBirthDate());
            ..........
        }
    }

That's it! Now, when you touch the text field, a dialog box appears and you can choose a date.
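The post intentionally leaves out DateRecord and DateUtils, so here is a minimal, hypothetical sketch of what they could look like, just so the example compiles. The dd/MM/yyyy format and the class bodies are my assumptions, and this DateUtils is a custom class, not android.text.format.DateUtils.

    import java.util.Locale;

    // In DateRecord.java -- a simple bean, as described in the remarks above.
    public class DateRecord {
        public int year;
        public int monthOfYear; // 0-based, as used by DatePickerDialog and Calendar
        public int dayOfMonth;
    }

    // In DateUtils.java -- converts between a String and a DateRecord (assumed dd/MM/yyyy format).
    public final class DateUtils {

        public static String formatDate(java.util.Date date) {
            return new java.text.SimpleDateFormat("dd/MM/yyyy", Locale.getDefault()).format(date);
        }

        public static String formatDate(int year, int monthOfYear, int dayOfMonth) {
            return String.format(Locale.getDefault(), "%02d/%02d/%04d",
                    dayOfMonth, monthOfYear + 1, year);
        }

        public static DateRecord parseDate(String text) {
            String[] parts = text.split("/");
            DateRecord record = new DateRecord();
            record.dayOfMonth = Integer.parseInt(parts[0]);
            record.monthOfYear = Integer.parseInt(parts[1]) - 1;
            record.year = Integer.parseInt(parts[2]);
            return record;
        }

        private DateUtils() { }
    }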
Posted almost 14 years ago
In the latest episode of the Java Spotlight podcast, we interviewed Donald Smith, Director of Product Management at Oracle. You can grab just this episode or fetch the whole feed.
Posted almost 14 years ago
A little while ago, the OpenJDK twitter feed reached 5000 followers, about three months after reaching 4000 by the end of last year. Thanks for following along on the journey towards JDK 8!