
News

Posted over 12 years ago by Blee
What's New in Apache Sqoop 1.4.0-incubating

Apache Sqoop recently celebrated its first incubator release, version 1.4.0-incubating. There are several new features and improvements in this release, and this post covers some of the more interesting changes. Sqoop is currently undergoing incubation at The Apache Software Foundation. More information on this project can be found at http://incubator.apache.org/sqoop.

Customized Type Mapping (SQOOP-342)

Sqoop is equipped with a default mapping from most SQL types to appropriate Java or Hive counterparts during import. However, this one-mapping-fits-all approach might not be ideal in all scenarios, considering the wide variety of data stores available today, not to mention that certain vendor-specific SQL types may not be covered by the default mapping. To allow customized type mapping, two new arguments, map-column-java and map-column-hive, have been introduced for changing the mapping to Java and Hive, respectively. The list of mappings is expected in the form <column name>=<target type>, such as:

$ sqoop import ... --map-column-java id=Integer,name=String

In the above example, the columns id and name will be mapped to Java Integer and String, respectively.

Boundary Query Support (SQOOP-331)

By default, Sqoop uses a canned query (select min(<split column>), max(<split column>) from <table name>) to determine the boundaries for creating splits in all cases. However, this query may not always be the most efficient one. Hence, to provide the flexibility of using different queries for distinct usages, a new boundary-query argument has been added that accepts any arbitrary query returning two numeric columns, which are then used for the same purpose of creating splits.

Date/Time Incremental Append (SQOOP-321)

Incremental import in Sqoop can be used to retrieve only those rows where the value of a check column is beyond a certain threshold. The threshold needs to be the maximum value of the check column (in append mode) or the timestamp (in lastmodified mode) at the end of the last import. Previously, in append mode, the check column had to be of a numeric type. If a date/time type was desired, the user had to manually select the maximum value of the date/time column and then specify that value as the last-value argument in lastmodified mode instead. As part of this release, the check column can now be of a date/time type as well.

Composite Key Update (SQOOP-313)

By default, Sqoop export adds new records to a table using INSERT statements. However, if any record conflicts with an existing one due to table constraints (such as a unique key), the underlying INSERT statement will fail and so will the export process. If an existing record needs to be modified instead, the update-key argument can be specified, and UPDATE statements will be used underneath. Before this release, only a single column name could be specified as the update-key argument; that column was used to determine the matching record(s) for update. However, in many real-world situations, multiple columns are required to identify the matching record(s). Thus, starting from this release, a comma-separated list of column names can be given as the update-key argument.

Mixed Update/Insert Export (SQOOP-327)

As mentioned, Sqoop export can only either insert (by default) or update (with the update-key argument) records into a table. As a result, one issue is that if data are being inserted, they may cause constraint violations when they already exist.
Another issue is that if data are being updated, they may be silently ignored when no matching update keys are found. What has been lacking is the ability to both update the records with matching update keys and insert those without. A new update-mode argument is introduced to resolve the above issues. Its value can be either updateonly or allowinsert. As the names suggest, the difference is that records without matching update keys are simply dropped when the value is updateonly, or are inserted when the value is allowinsert. Note that this feature is currently provided only for the built-in Oracle connector.

IBM DB2 Support (SQOOP-329)

The extensible architecture used by Sqoop allows support for a data store to be added as a so-called connector. By default, Sqoop comes with connectors for a variety of databases such as MySQL, PostgreSQL, Oracle, and SQL Server. In addition, there are also third-party connectors available separately from various vendors for several other data stores, such as Couchbase, VoltDB, and Netezza. As part of this release, a new connector is provided to import and export data against IBM DB2 databases.

The Final Chapter

If you are interested in learning more about the changes, a complete list for Sqoop 1.4.0-incubating can be found here. You are also encouraged to give this new release a try. Any help and feedback is more than welcome. For more information on how to report problems and how to get involved, visit the Sqoop project website at http://incubator.apache.org/sqoop/.
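To make the new arguments more concrete, here is a hedged sketch of how they might be combined on the command line. The connection strings, table names, and column names below are made up for illustration, and the exact behavior should be verified against the Sqoop 1.4.0-incubating documentation:

# Import with a custom boundary query for split creation (hypothetical table and columns)
$ sqoop import --connect jdbc:mysql://db.example.com/sales --table orders \
    --split-by order_id \
    --boundary-query "SELECT MIN(order_id), MAX(order_id) FROM orders WHERE region = 'EU'"

# Export with a composite update key and mixed update/insert mode; per the note above,
# allowinsert currently applies only to the built-in Oracle connector
$ sqoop export --connect jdbc:oracle:thin:@db.example.com:1521:ORCL --username scott --table ORDER_TOTALS \
    --export-dir /user/hive/warehouse/order_totals \
    --update-key CUSTOMER_ID,ORDER_DATE --update-mode allowinsert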
Posted over 12 years ago by cos
With more and more people jumping on the big data bandwagon, it is very gratifying to see that Hadoop is gaining momentum every day. Even more fascinating is to see how the idea of putting together a bunch of service components on top of Hadoop proper is gathering steam. IT and software development professionals are getting a better understanding of the benefits that a flexible set of loosely coupled yet compatible components provides when one needs to customize a data processing solution at scale. The biggest problem for most businesses trying to add Hadoop infrastructure to their existing IT is a lack of knowledge, professional support, and/or a clear understanding of what's out there on the market to help. Essentially, Hadoop exists in one incarnation - the open-source project under the umbrella of the Apache Software Foundation (ASF). This is where all the innovations in Hadoop are coming from. And essentially this is a source of profit for a few commercial offerings.

What's wrong with the picture, you might ask? Well, the main issue with most of those offerings is twofold. They are either immature and based on sometimes unfinished or unreleased Hadoop code, or they provide no significant value add compared to Hadoop proper, available in source form from hadoop.apache.org. And no matter whether either of the above (or both together) applies to a commercial solution based on Hadoop, you can be sure of one thing: these solutions will cost you literally tons of money - as much as $1k/node/year in some cases - for what is essentially available for free. "What about neat packages I can get from a commercial provider and perhaps some training too?" one might ask. Well, yeah, if you are willing to pay top dollar per node to get packaging bugs fixed or to learn how to install packages on a virtual machine - go ahead by all means. However, keep in mind that you can always get a set of packages for Hadoop produced by another open source project called Bigtop, hosted by Apache. What you essentially get from it are packages for your Linux distro, which can be easily installed on your cluster's nodes. A great benefit is that you can easily trim your Hadoop stack to include only what you need: Hadoop + Hive, or perhaps Hadoop + HBase (which will automatically pick up ZooKeeper for you).

At any rate, the best part of the story isn't a set of packages that can be installed: after all, this is what packages are usually created for, right? The problem with packages or other forms of component distribution is compatibility: you don't know in advance whether package A will work nicely with package B v1.2 unless somebody has tested that assumption. Even then, the testing environment might be significantly different from your production environment, and then all bets are off. Unless - again - you're willing to pay through the nose to someone who is willing to do it for you. And that's where the true miracle of something like Bigtop comes to the rescue.

Before I explain more, I want to step back a bit and look at some recent history. A couple of years ago, Yahoo's Hadoop development team had to address the issue of putting together a working and well-validated Hadoop stack including a number of components developed by different engineering organizations with their own development schedules and integration criteria. The main integration point of all of the pieces was the operations team, which was in charge of a large number of cluster deployments, provisioning, and support.
Without their own QA staff, they were oftentimes at the mercy of the assumed code or configuration quality coming from all corners of the company. Even if all of these components were of high quality, there was no guarantee that they would work together as expected once put together on a cluster. And indeed, integration problems were many. That's where a small team of engineers, including yours truly, put together a prototype of a system called FIT (Final Integration Testing). The system essentially allowed you to pick a packaged component you wanted to validate against your cluster environment and perform the deployment, configuration, and testing with integration scenarios provided either by the component's owner or by your own team. The approach was so effective that the project was continued and funded further in the form of HIT (Hadoop Integration Testing). At that point two of us left for what seemed like greener pastures back then. We thought the idea was the real deal, so we continued on the path of developing a less custom and more adoptable technology based on open standards such as Maven and Groovy.

Here you can find the slides from the talk we gave at eBay about a year ago. The presentation puts the concept of the Hadoop data stack in open writing for the first time, and defines the stack customization and validation technology. When this presentation was given, we already had a well-working mechanism for creating, deploying, and validating both packaged and non-packaged Hadoop components. Bigtop - open-sourced for the second time just a few months ago and based on our project above - has added a package creation layer on top of the stack validation product. This, of course, makes your life even easier. And even more so with a number of Puppet recipes allowing you to deploy and configure your cluster in a highly efficient and automatic manner. I encourage you to check it out.

Bigtop has been successfully used for validating the release of Apache Hadoop 0.20.205, which has become the foundation of Hadoop 1.0.0. Another release of Hadoop - 0.22 - used Bigtop for release candidate validation, and so on. On top of "just packages", Bigtop now produces ready-to-go VMs pre-installed with the Hadoop stack for different Linux distributions: just download one and instantiate your very own cluster in minutes! We'll tell you more about that next time. I encourage you to check out the Bigtop project and contribute your ideas, time, and knowledge to it!
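As a rough sketch of the "trimmed stack" idea, installing just the pieces you need from Bigtop packages could look something like the following on a Debian-style system. The repository setup is assumed to be already in place, and the package names are indicative only - they vary between Bigtop releases and Linux distributions:

# Assumes a Bigtop package repository has already been configured for your distro;
# package names shown here are illustrative and differ between Bigtop releases.
$ sudo apt-get update
$ sudo apt-get install hadoop hive     # Hadoop + Hive only
$ sudo apt-get install hadoop hbase    # Hadoop + HBase; ZooKeeper is pulled in as a dependency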
Posted over 12 years ago by jsc
I sent out a similar email to our mailing list before Christmas and before I took a short break to relax with my family and friends. But it's maybe worth sharing with a broader audience here on the blog. Let me first tell you something about me (Juergen Schmidt = jsc) and explain the title of this blog. I have been involved in the OpenOffice project since the beginning and worked on the source code even before that, when I started to work for StarDivision in 1997. So I can for sure argue that I am one of the many grandfathers of the OpenOffice project, and that the last year - or rather the last 16 months - was not the most brilliant period in the long and successful history of the OpenOffice project. A lot of misunderstanding and miscommunication led to confusion among our users, and before we start into a challenging new year I would like to share some thoughts with you about the last months, my private expectations, and my wishes for the next year.

Oracle's announcement to stop their investment in OpenOffice.org was a shock for me. Well, the reason is obvious: I was paid by Oracle and worked on this project. The people who know me from the past know that I am a 100% OpenOffice.org guy, and I always appreciated working on this project together with our community. I always felt part of the overall community. I know the reasons that were responsible for the LibreOffice fork and the split of the community, and I have to confess that I can understand them. But I didn't like how it was done. If Oracle had taken this step 6 months earlier, I am sure we wouldn't have this fork and we wouldn't have this split of the community. We would potentially still have the go-oo fork, which was the foundation for LibreOffice, but that is something different. Anyway, it is as it is at the moment, and we will see how it moves forward in the future.

The grant to Apache was at least the appropriate signal that OpenOffice.org as a project will never die. The brand is too big and too important, the opportunities around the product and the overall ecosystem are great, and I am very sure that the project will continue and will hopefully be shining brighter than before. But a lot of work was and still is in front of us. We had to deal with a lot of things in parallel that other derivative projects didn't have to deal with, at least not in public. We had to migrate the whole OpenOffice.org infrastructure to Apache and had to ensure that it worked. I think we were very successful here and have migrated nearly everything we need from a technical perspective. Our mission was to migrate as much as possible of the available material on www.openoffice.org and at least save it for later use. I think we did it! Thanks to all who made this possible. And in the future we can concentrate on some structural and conceptual redesign of the main portal page www.openoffice.org, to provide our users with the information they need: to find the product, to find further information such as help and the discussion forums, and to find their way into the community if they want to do more.

We couldn't simply use the code as it was and continue with development as in the past, because of the different license. This is a huge challenge that is still ongoing and that I had many problems with at the beginning. It is not easy to explain why you remove something and replace it with something new that provides the same functionality but is under a more appropriate license. It's simply boring work and no developer really likes it.
But it is a prerequisite for Apache, and in the end it is better for our ecosystem, because the Apache license is much friendlier for business usage than any other open source license. As an individual developer I don't care too much about all the different open source licenses, as long as the work I do is good for the project and, in the end, for our users. But I learned that the Apache license can be a door opener for more contributors and more engagement from companies. I think that is important, and I am confident that it will help to drive our project forward. And not everything is bad. With the IP cleanup we really cleaned up many things, and Armin's replacement for the SVG import/export is the best solution we have ever had for OpenOffice, with the biggest potential for further improvements. All this is really motivating for the future!

Well, we had a lot of noise and communication problems on our mailing lists, and I think we missed transmitting the message that OpenOffice.org has found a new home under the Apache foundation, and we missed communicating the progress we have made in public. We can do much better in the future! And I am looking forward to working with all of you on this communication part in the future. We don't have to be shy; we work on a great project with a great product, and we should have enough to communicate and to share in public (not only on our mailing list but on all the modern and very useful media like Facebook, Google+, Twitter, ...).

For the next year I expect that we find our way to guide and control our project a little bit better. I expect our first release early next year, and hopefully a second one later in the year, where we can show that we are able to drive the project forward and that we are able to create and establish a vibrant and living community. I wish that we can gain trust in the project and in the Apache way, and that it proves to be a good move forward. Our users simply want the best free, open source office productivity suite, and they don't care about the different licenses. Enterprise users would like to see a huge and working community with the participation of a lot of different companies, or at least their employees, working on the project. We all know that such a huge and successful project can only work if we have individual community members as well as full-time community members. What is important is the WE and the TOGETHER that make open source projects successful.

I heard voices and read emails where people said that Apache is not able to manage such a huge end-user-oriented project with all the necessary things. A strong statement, isn't it? At the beginning, I have to confess, I also had doubts and wasn't sure. But as I mentioned in an earlier email on our mailing list, I have seen and received the necessary signals over time that Apache is willing to listen and is open to changes as well, if they make sense for the overall success of our project and if these changes are aligned with the overall Apache principles. And I think that is fair enough for all. The move to Apache is a big challenge for all of us. Apache has many very successful projects, but none of these projects has such a huge end-user focus as OpenOffice. And of course OpenOffice is no small or new project. No, it is one of the biggest and most successful open source projects ever. And the migration was and is not easy. But we, the community, can do it; we as individuals, everybody, can help, and together we will do it!
And the Apache way and the Apache license have proven in the past, with many successful projects, that they are a good way and a good license to achieve this. For our users, I wish that the press will do a better job in the future of researching facts and stories, or, if they prefer to write articles based on first-hand information, that they contact the Apache OpenOffice project directly. We are here and can help with information! That will definitely help to avoid further confusion about the future of OpenOffice. Enough from me for now, and I hope that I haven't bothered you with my private thoughts. I wish you all a happy new year: enjoy these days, take your own break too, and recharge your batteries for our next challenge in 2012. Regards, Juergen
Posted over 12 years ago by arvind
Apache Flume is currently undergoing incubation at The Apache Software Foundation. More information on this project can be found at http://incubator.apache.org/flume. The next generation of Apache's log ingestion framework, Flume NG, has bolted out of the starting blocks. Last Friday, employees from a wide range of enterprises packed Cloudera headquarters to learn more and to contribute to the project themselves. With over 40 folks attending the hackathon, what's clear is that major enterprises value reliable, performant data ingestion, and that Flume's new branch is well positioned to make that kind of ingest a reality. During the session, 10 JIRAs were filed, with one resolved, one code review in progress, and four patches in the works. Cloudera captured the morning session on video and has made it available to members of the Flume user and dev lists and to the community at large. During the morning Teach-In, Flume committers covered NG's design, the mechanics of contributing to the Apache project, and more. Get a better understanding of NG's design by reading Arvind Prabhakar's recent post, which does a nice job covering the ins and outs of the new framework: https://blogs.apache.org/flume/entry/flume_ng_architecture. Guest post by Basier Aziz. Photo by Kate Ting and video by Cloudera.
Posted over 12 years ago by grobmeier
Dear all,

The Apache log4php team is pleased to announce the release of Apache log4php 2.2.0. Significant changes in this release include:

- a full rewrite of the configuration logic, which greatly improves error reporting
- inline configuration using PHP arrays
- a new web site with better documentation and more examples
- a new layout: serialized

Release notes, including the full change log, can be found here:
http://logging.apache.org/log4php/changelog.html

The release is available from:
http://logging.apache.org/log4php/download.html

The site has just been updated, so it may take a little while to sync. Thanks to everyone who participated in the making of this release.

Best regards,
The Apache log4php team
Posted over 12 years ago by Sally
Earlier this year the OpenOffice.org code base was donated to The Apache Software Foundation. The resulting project, Apache OpenOffice (Incubating), is progressing well as a podling in the Apache Incubator, with a rapidly growing community and project infrastructure (see http://incubator.apache.org/projects/openofficeorg.html). This open letter seeks to articulate our vision for the future of Apache OpenOffice within the wider Open Document Format ecosystem.

With the OpenOffice.org donation, Apache has become a significant part of a global ecosystem that was initially formed more than ten years ago. That ecosystem includes support for an internationally recognised Open Standard for documents (the Open Document Format developed by the OASIS Open Document Format for Office Applications [OpenDocument] Technical Committee) and at least 13 related open source projects. In addition to being a critical component of the IT industry, OpenOffice.org is of significant value to the global user community, with approximately 100 million users and over 70 native language packs.

In such a large ecosystem it is impossible to agree upon a single vision for all participants. Apache OpenOffice does not seek to define a single vision, nor does it seek to be the only player. Instead we seek to offer a neutral and powerful collaboration opportunity. The permissive Apache License 2.0 reduces restrictions on the use and distribution of our code and thus facilitates a diverse contributor and user base for the benefit of the whole Open Document Format ecosystem. Within an Apache project it is possible to rise above political, social and commercial differences in the pursuit of maximally effective implementations of freely available open standards and related software tools. Our license and open development model is widely recognised as one of the best ways to ensure open standards, such as ODF, gain traction and adoption.

Apache OpenOffice offers much more potential for OpenOffice.org than "just" an end-user Microsoft Office replacement. We offer a vendor-neutral space in which to collaborate whilst enabling third parties to pursue almost any for-profit or not-for-profit business model. Apache has over 100 world-leading projects and over 50 incubating projects. Within these projects we have demonstrated many times over that our model of collaboration is highly successful. Maximum benefit is gained through increased engagement with our communities. While it is possible, and legal, to take our code and work independently of the foundation, we believe that collaborating wherever possible strengthens the ecosystem and facilitates progress towards one's own vision for ODF.

Each participant in an Apache project is free to set their own boundaries of collaboration. However, they are not free to use our trademarks in confusing ways. This includes OpenOffice.org and all related marks. To ensure that the use of Apache marks will not lead to confusion about our projects, we must control their use in association with software and related services provided by others. Our trademark policy is clearly laid out at http://www.apache.org/foundation/marks/. Only the Apache Software Foundation can make releases of software that bear our trademarks. The Apache OpenOffice (Incubating) project has tentatively identified the first quarter of 2012 for a Version 3.4 release.
As well as clarifying our position in relation to our trademarks, we wish to make it clear that no third party has been given approval to solicit donations of any kind on behalf of the Apache Software Foundation or any of its projects, including OpenOffice.org. In general, if a communication does not come to you from a verifiable apache.org address then it is not an official Apache Software Foundation or OpenOffice.org communication. We invite and encourage everyone engaged with the Open Document Format standards to explore opportunities for collaboration with the Apache OpenOffice (Incubating) project. For further information see http://incubator.apache.org/openofficeorg/get-involved.html.

# # #
Posted over 12 years ago by danhaywood
If you're thinking of introducing Apache Isis to your co-workers, you might be interested to know that Isis already has an "Introducing Apache Isis" presentation slide deck (in ODP, PPTX or PPT, PDF slides or notes). You are free to use this as you will. I've just updated the deck in line with the forthcoming v0.2.0 release; the most significant new content is an overview of the main use cases for Apache Isis. The online demo for Isis also gets a link. As ever, feedback welcome!
Posted over 12 years ago by administrator
A few projects have requested it, and now it is here! Check out https://translate.apache.org and get your project added. See also https://cwiki.apache.org/confluence/display/INFRA/translate+pootle+service+auth+levels for more information - you will see that general public (non-logged-in) users can submit translation requests, whilst any logged-in user (i.e. committers) can process those submissions. Enjoy! Any queries to the infra team, please, or file an INFRA Jira ticket.
Posted over 12 years ago by arvind
Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. Flume is currently undergoing incubation at The Apache Software Foundation. More information on this project can be found at http://incubator.apache.org/flume. Flume NG is the work related to a new major revision of Flume and is the subject of this post.

Prior to entering the incubator, Flume saw incremental releases leading up to version 0.9.4. As Flume became adopted, it became clear that certain design choices would need to be reworked in order to address problems reported in the field. The work necessary to make this change began a few months ago under the JIRA issue FLUME-728. This work currently resides on a separate branch by the name flume-728, and is informally referred to as Flume NG. At the time of writing this post, Flume NG had gone through two internal milestones - NG Alpha 1 and NG Alpha 2 - and a formal incubator release of Flume NG is in the works.

At a high level, Flume NG uses single-hop message delivery guarantee semantics to provide end-to-end reliability for the system. To accomplish this, certain new concepts have been incorporated into its design, while certain other existing concepts have been either redefined, reused, or dropped completely. In this blog post, I will describe the fundamental concepts incorporated in Flume NG and talk about its high-level architecture. This is the first in a series of blog posts by the Flume team that will go into further detail on its design and implementation.

Core Concepts

The purpose of Flume is to provide a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. The architecture of Flume NG is based on a few concepts that together help achieve this objective. Some of these concepts have existed in the past implementation, but have changed drastically. Here is a summary of the concepts that Flume NG introduces, redefines, or reuses from the earlier implementation:

- Event: A byte payload with optional string headers that represents the unit of data that Flume can transport from its point of origination to its final destination.
- Flow: Movement of events from the point of origin to their final destination is considered a data flow, or simply flow. This is not a rigorous definition and is used only at a high level for description purposes.
- Client: An interface implementation that operates at the point of origin of events and delivers them to a Flume agent. Clients typically operate in the process space of the application they are consuming data from. For example, the Flume Log4j Appender is a client.
- Agent: An independent process that hosts Flume components such as sources, channels, and sinks, and thus has the ability to receive, store, and forward events to their next-hop destination.
- Source: An interface implementation that can consume events delivered to it via a specific mechanism. For example, an Avro source is a source implementation that can be used to receive Avro events from clients or other agents in the flow. When a source receives an event, it hands it over to one or more channels.
- Channel: A transient store for events, where events are delivered to the channel via sources operating within the agent. An event put in a channel stays in that channel until a sink removes it for further transport.
  An example of a channel is the JDBC channel, which uses a file-system-backed embedded database to persist events until they are removed by a sink. Channels play an important role in ensuring the durability of flows.
- Sink: An interface implementation that can remove events from a channel and transmit them to the next agent in the flow, or to the event's final destination. Sinks that transmit the event to its final destination are also known as terminal sinks. The Flume HDFS sink is an example of a terminal sink, whereas the Flume Avro sink is an example of a regular sink that can transmit messages to other agents that are running an Avro source.

These concepts help in simplifying the architecture, implementation, configuration, and deployment of Flume.

Flow Pipeline

A flow in Flume NG starts from the client. The client transmits the event to its next-hop destination. This destination is an agent. More precisely, the destination is a source operating within the agent. The source receiving this event will then deliver it to one or more channels. The channels that receive the event are drained by one or more sinks operating within the same agent. If the sink is a regular sink, it will forward the event to its next-hop destination, which will be another agent. If instead it is a terminal sink, it will forward the event to its final destination. Channels allow the decoupling of sources from sinks using the familiar producer-consumer model of data exchange. This allows sources and sinks to have different performance and runtime characteristics and yet be able to effectively use the physical resources available to the system.

Figure 1 below shows how the various components interact with each other within a flow pipeline.

Figure 1: Schematic showing logical components in a flow. The arrows represent the direction in which events travel across the system. This also illustrates how flows can fan out by having one source write the event out to multiple channels.

By configuring a source to deliver the event to more than one channel, flows can fan out to more than one destination. This is illustrated in Figure 1, where the source within the operating agent writes the event out to two channels - Channel 1 and Channel 2. Conversely, flows can be converged by having multiple sources operating within the same agent write to the same channel. An example of the physical layout of a converging flow is shown in Figure 2 below.

Figure 2: A simple converging flow in Flume NG.

Reliability and Failure Handling

Flume NG uses channel-based transactions to guarantee reliable message delivery. When a message moves from one agent to another, two transactions are started: one on the agent that delivers the event and the other on the agent that receives the event. In order for the sending agent to commit its transaction, it must receive a success indication from the receiving agent. The receiving agent only returns a success indication if its own transaction commits properly first. This ensures guaranteed delivery semantics between the hops that the flow makes. Figure 3 below shows a sequence diagram that illustrates the relative scope and duration of the transactions operating within the two interacting agents.

Figure 3: Transactional exchange of events between agents.

This mechanism also forms the basis for failure handling in Flume NG.
When a flow that passes through many different agents encounters a communication failure on any leg of the flow, the affected events start getting buffered at the last unaffected agent in the flow. If the failure is not resolved in time, this may lead to the failure of the last unaffected agent, which would then force the agent before it to start buffering events. Eventually, if the failure persists up to the point where the client transmits the event to its first-hop destination, the failure will be reported back to the client, which can then allow the application generating the events to take appropriate action. On the other hand, if the failure is resolved before the first-hop agent fails, the buffered events in the various downstream agents will start draining towards their destination. Eventually the flow will be restored to its original characteristic throughput levels.

Figure 4 below illustrates a scenario where a flow comprising two intermediary agents between the client and the central store goes through a transient failure. The failure occurs between Agent 2 and the central store, resulting in the events getting buffered at Agent 2 itself. Once the failing link has been restored to normal, the buffered events drain out to the central store and the flow is restored to its original throughput characteristics.

Figure 4: Failure handling in flows. In (a) the flow is normal and events can travel from the client to the central store. In (b) a communication failure occurs between Agent 2 and the event store, resulting in events being buffered on Agent 2. In (c) the cause of the failure has been addressed, the flow is restored, and any events buffered in Agent 2 are drained to the store.

Wrapping Up

In this post I described the various concepts that are a part of Flume NG and its high-level architecture. This is the first of a series of posts from the Flume team that will highlight the design and implementation of this system. In the meantime, if you need any more information, please feel free to drop an email on the project's user or developer lists, or alternatively file the appropriate JIRA issues. Your contribution in any form is welcome on the project.

Links:
Project Website: http://incubator.apache.org/flume/
Flume NG Getting Started Guide: https://cwiki.apache.org/confluence/display/FLUME/Getting+Started
Mailing Lists: http://incubator.apache.org/flume/mail-lists.html
Issue Tracking: https://issues.apache.org/jira/browse/FLUME
IRC Channel: #flume on irc.freenode.net
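To make the source/channel/sink wiring more concrete, here is a minimal single-agent sketch using an Avro source, a JDBC channel, and an HDFS sink as described above. It follows the properties-file configuration style and flume-ng launcher of later Flume NG releases; all names and paths are made up, and the exact format in the alpha builds discussed in this post may differ:

# example.conf - a single agent named "agent1" with one flow (names are illustrative)
agent1.sources = avro-src
agent1.channels = jdbc-ch
agent1.sinks = hdfs-sink

agent1.sources.avro-src.type = avro
agent1.sources.avro-src.bind = 0.0.0.0
agent1.sources.avro-src.port = 41414
agent1.sources.avro-src.channels = jdbc-ch

agent1.channels.jdbc-ch.type = jdbc

agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.channel = jdbc-ch
agent1.sinks.hdfs-sink.hdfs.path = hdfs://namenode.example.com/flume/events

# Start the agent with the flume-ng launcher shipped in the NG distribution:
$ bin/flume-ng agent --conf ./conf --conf-file example.conf --name agent1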
Posted over 12 years ago by danhaywood
We've finally got around to putting together an online demo of Apache Isis for would-be users to quickly grok what Isis is all about. If you don't fancy reading any further but just want to play, you can find it here. If you're still with me, you'll find that the online demo shows how Isis can dynamically generate a (human-usable) webapp and a (machine-usable) RESTful API from the same domain object model. The REST API is that defined by the Restful Objects spec. The online demo bundles its own documentation, which shows the full source code for the domain model (all 3 classes) along with guidance on how to use Chrome (or similar) extensions to play with the REST API directly from your browser. Feedback very welcome on the isis-dev mailing list.
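If you would rather poke at the REST API from the command line than through a browser extension, a request in the spirit of the Restful Objects spec might look like the sketch below. The base URL and the /restful context path are assumptions about the demo deployment, so check the bundled documentation for the actual endpoints:

# Hypothetical base URL; /services is the Restful Objects resource that lists the domain services
$ curl -H "Accept: application/json" http://localhost:8080/restful/services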