Posted over 12 years ago by Sally
Highly-scalable, Open Source "NoSQL" Distributed Database Handles Massive Workloads for Cisco, Constant Contact, DataStax, Digg, IBM, Netflix, Rackspace, Twitter, Walmart Labs, and more.
Forest Hill, MD – 18 October 2011 – The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of nearly 150 Open Source projects and initiatives, today announced Apache Cassandra™ v1.0. The highly-scalable, distributed "NoSQL" database plays a key role in Cloud computing by quickly handling massive workloads in real time with minimal disruption to services or systems.
"Dealing with very large amounts of data in real time is a must for most businesses today," said Jonathan Ellis, Vice President of Apache Cassandra. "Cassandra accommodates high query volumes, provides enterprise-grade reliability, and scales easily to meet future growth requirements – while using fewer resources than traditional solutions."
Apache Cassandra is successfully used by large scale organizations such as Cisco, Cloudkick, Digg, Rackspace, Reddit, Twitter, and Walmart Labs to affordably process massive data sets in real-time across large server clusters. The largest Cassandra production cluster to date exceeds 300 terabytes of data over 400 machines.
"As the most-widely deployed mobile rich media advertising platform, Medialets uses Apache Cassandra™ for handling time series based logging from our production operations infrastructure," said Joe Stein, Chief Architect of Medialets. "We store contiguous counts for data points for each second, minute, hour, day, month so we can review trends over time as well as the current real time set of information for tens of thousands of data points. Cassandra makes it possible for us to manage this intensive data set and the release of 1.0 makes it that much easier."
Deployed across an array of applications, from barcode scanning and geospatial databases to storing user account information and activity logs, Apache Cassandra is easily scalable, efficient, and performant, typically handling over 5,000 requests per second per core. Innovative uses of Apache Cassandra include:
AppScale – back-end for Google App Engine applications
Clearspring – tracking URL sharing and serving over 200 million daily view requests
Cloudtalk – creating messaging applications
Constant Contact – powering social media marketing applications
Formspring – counting/storing social graph data for 26 million accounts with 10 million daily responses
Mahalo.com – recording user Q & A activity logs and topics
Netflix – streaming services back-end database
Openwave – distributed storage mechanism for next generation messaging platform
OpenX – storing and replicating advertisements and targeting data for ad delivery over 130 nodes
Plaxo – analyzing 3 billion contacts against public data sources and identifying 600 million unique contacts
RockYou – recording every single click in real time for 50 million online gaming users
Urban Airship – mobile service hosting for over 160 million application installs across 80 million unique devices
Yakaz – storing millions of images and social data
Matthew Conway, CTO of Backupify said, "Apache Cassandra™ makes it possible for us to build a business around really high write loads in a scalable fashion without having to build and operate our own sharding layer. The release of Cassandra 1.0 is an exciting milestone for the project and we look forward to exploring the new features and performance enhancements."
"We utilize Apache Cassandra™ to deliver DataStax Enterprise, a distributed data platform that makes it easy for customers to build, deploy, and operate elastically scalable on-premise and cloud-optimized applications," explained Billy Bosworth, CEO of DataStax. "We chose Cassandra to power this platform because of its real-time scalability, operational simplicity, and above all, its active community of dedicated developers. Version 1.0 is the culmination of their efforts and we look forward to seeing Cassandra 1.0 power our customers' applications."
Originally developed at Facebook in 2008, Cassandra entered the Apache Incubator in 2009, and graduated as an Apache Top-Level Project (TLP) in February 2010. Apache Cassandra v1.0 will be featured in the "Data Handling & Analytics" track at ApacheCon, 7-11 November 2011, in Vancouver, Canada. To register, visit http://apachecon.com/.
Availability and Oversight
Apache Cassandra software is released under the Apache License v2.0, and is overseen by a Project Management Committee (PMC) that guides the Project's day-to-day operations, community development, and product releases. Apache Cassandra source code, downloads, documentation, mailing lists, and related resources are available at http://cassandra.apache.org/.
About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees nearly one hundred fifty leading Open Source projects, including Apache HTTP Server -- the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 350 individual Members and 3,000 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) not-for-profit charity, funded by individual donations and corporate sponsors including AMD, Basis Technology, Cloudera, Facebook, Google, HP, Hortonworks, IBM, Matt Mullenweg, Microsoft, PSW Group, SpringSource/VMware, and Yahoo!. For more information, visit http://www.apache.org/.
"Apache", "Apache Cassandra", and "ApacheCon" are trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.
# # #
Posted over 12 years ago by Sally
On 1 June 2011, Oracle Corporation submitted the OpenOffice.org code base to The Apache Software Foundation. That submission was accepted, and the project is now being developed as a podling in the Apache Incubator under the ASF's meritocratic process informally dubbed "The Apache Way".
OpenOffice.org is now officially part of the Apache family.
The project is known as Apache OpenOffice.org (incubating).
Over its 12-year history, the ASF has welcomed contributions from individuals and organizations alike, but, as a policy, does not solicit code donations. The OpenOffice.org code base was not pursued by the ASF prior to its acceptance into the Apache Incubator.
The Apache OpenOffice.org Podling Project Management Committee (PPMC) and Committer list are nearly 10 times greater than those of other projects in the Apache Incubator, demonstrating the tremendous interest in this project.
As with many highly-visible products, there has been speculation and conjecture about the future of OpenOffice.org at Apache. More recently, destructive statements have been published by both members of the greater FOSS community and former contributors to the original OpenOffice.org product, suggesting that the project has failed during the 18 weeks since its acceptance into the Apache Incubator.
Whilst the ASF operates in the open -- our code and project mailing lists are publicly accessible -- ASF governance permits projects to make information and code freely available when the project deems them ready to be released. Apache OpenOffice.org is not at risk.
As an end-user-facing product, OpenOffice.org is unique in comparison to the other nearly 170 products currently being developed, incubated, and shepherded at the ASF. Considered to be "ingredient brands", countless competing Web server, Cloud computing, data handling, and other solutions behind the products serving millions of users worldwide are, unbeknown to most, "Powered by Apache".
And we're OK with that.
More than 70 project Committers are actively collaborating to ensure that the future of the OpenOffice.org code base and community is in alignment with The Apache Way. The project's extensive plans include assessing the elements necessary to update a product that hasn't had an official release in nearly a year; addressing parts of the product's functionality encumbered by non-Apache-licensed components; and working with a code base that has been forked and maintained by a community pursuing market dominance. As such, it is critical that we remain pragmatic about the project's next steps during this transition phase.
We understand that stakeholders of a project with a 10+ year history -- be they former product managers or casual users -- may be unfamiliar with The Apache Way and question its methods. Those following the project's migration to a process and culture unique to the Apache community may challenge the future sustainability of the project.
Such concerns are not atypical with the incubation of Open Source projects with well-established communities -- the successful graduation of Apache Subversion and Apache SpamAssassin, among others, is proof that The Apache Way works.
As an all-volunteer organization, we do not compensate any contributors to develop Apache code. We do, however, support those individuals with relevant expertise to pursue consulting/remuneration opportunities with interested parties, but must reiterate that they are barred from doing so on behalf of the ASF or any Apache initiatives -- be they Top-level Projects (TLPs) or emerging products in the Apache Incubator and Labs. Otherwise, they would be in violation of the Apache trademark policy, which the ASF strongly defends in order to protect its communities.
At the ASF, the answer is openness, not further fragmentation. There is ample room for multiple solutions in the marketplace that are Powered by Apache. We welcome differences of opinion: a requirement at Apache is that a healthy project be supported by an open, diverse community comprising multiple organizations and individual contributors.
We congratulate the LibreOffice community on their success over their inaugural year and wish them luck in their future endeavors. We look forward to opening up the dialogue between Open Document Format-oriented communities to deepen understanding and cease the unwarranted spread of misinformation.
We welcome input and participation in the form of constructive contributions to Apache OpenOffice.org. There are myriad ways to help, from code development and documentation to community relations and "help desk" forums support to licensing and localization, and more.
The way to move this forward is via the ASF, which owns the OpenOffice.org trademark and official code base. This is our chance to be able to pull together our talents towards a cohesive goal and protect the project's ecosystem.
At a minimum, we owe that to the hundreds of millions of users of OpenOffice.org.
-- the ASF Press team and Apache OpenOffice.org incubating mentors
- Join the Apache OpenOffice.org project MeetUp at ApacheCon, 7-11 November 2011 in Vancouver, BC, Canada. For more information, visit http://apachecon.com/
- For more on Apache OpenOffice.org see http://incubator.apache.org/openofficeorg/
- For more information on the Apache Incubator see http://incubator.apache.org/
- The ASF trademark policy can be found at http://www.apache.org/foundation/marks/
"Apache", "OpenOffice.org", and "ApacheCon" are trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.
# # #
Posted over 12 years ago by khirano
I am testing whether Japanese text can be shown beautifully in the title and content of this Apache weblog.

I am a moderator for the ooo-general-ja mailing list. The day before yesterday I found that a moderator can remotely edit the text files that make up the responses the list sends out. Then I started translating and editing them. I have translated the "top" and "bottom" messages.

You know the "top" message. It goes like:

"Hi! This is the ezmlm program. I'm managing the ooo-o-ooo-AT-i.a-DOT-o mailing list. I'm working for my owner, who can be reached at ooo-o-ooo-owner-AT-i.a-DOT-o."

In Japanese it goes:

「こんにちは。私は ezmlm というプログラムです。私は ooo-o-ooo-AT-i.a-DOT-o を管理しています。私はこのメーリングリストのオーナーのために働いています。オーナーの連絡先は次のとおりです。 ooo-o-ooo-owner-AT-i.a-DOT-o.」

I like the following part of the "bottom" message:

"If despite following these instructions, you do not get the desired results, please contact my owner at ooo-general-ja-owner-AT-incubator.apache-DOT-org. Please be patient, my owner is a lot slower than I am ;-)"

In Japanese:

「ここに書かれた指示に従ったのにもかかわらず、望む結果が得られなかった場合は、ooo-general-ja-owner-AT-incubator.apache-DOT-org にメールを送り、このメーリングリストのオーナーと連絡を取ってください。このメーリングリストのオーナーは人間なので私より反応に時間がかかると思いますが辛抱強くお待ちください ;-)」

We know ;-) We have to be patient with human beings ;-)
Posted over 12 years ago by grobmeier
The Apache log4net team is pleased to announce the release of Apache
log4net 1.2.11. The release is available for download at
http://logging.apache.org/log4net/download.html
The Apache log4net library is a tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent Apache log4j framework to the Microsoft(R) .NET runtime.
log4net 1.2.11 is not only a bugfix release, it also adds support for
.NET 4.0 as well as the client profiles of .NET 3.5 and .NET 4.0.
See the release-notes at
http://logging.apache.org/log4net/release/release-notes.html
for a full list of changes.
Starting with this release log4net uses a new strong name key but we
also provide a binary distribution using the "old" strong name key of
log4net 1.2.10 and earlier. See the FAQ at
http://logging.apache.org/log4net/release/faq.html#two-snks
for details.
The binary distributions no longer contain assemblies built for the
Compact Framework 1.0 or the Shared Source CLI - you can build those
yourself using the source distribution.
Stefan Bodewig on behalf of the log4net community
Posted over 12 years ago by Sally
The Apache Software Foundation congratulates Apache Subversion, the popular Open Source version control system and software configuration management tool.
The award-winning project entered the Apache Incubator in November 2009 and became a Top-level Project (TLP) twelve months later. Apache Subversion is used by millions of companies around the globe, as well as by leading Apache projects such as Ant and Maven, as part of their overall application lifecycle management strategy.
In addition, Subversion is widely used throughout the Open Source community, including CodePlex, Django, FreeBSD, Free Pascal, GCC, Google Code, MediaWiki, Mono, PHP, Ruby, and SourceForge. According to Forrester Research, Apache Subversion is the recognized leader in the Standalone Software Configuration and Change Management category.
For the complete list of features, release notes, downloads, documentation and supporting information, as well as ways to participate in the project, please see http://subversion.apache.org/
# # #
Posted over 12 years ago by arvind
Apache Sqoop - Overview
Using Hadoop for analytics and data processing requires loading data into clusters and processing it in conjunction with other data that often resides in production databases across the enterprise. Loading bulk data into Hadoop from production systems, or accessing it from MapReduce applications running on large clusters, can be a challenging task. Users must consider details such as ensuring data consistency, managing the consumption of production system resources, and preparing data for provisioning into downstream pipelines. Transferring data using scripts is inefficient and time-consuming. Directly accessing data residing on external systems from within MapReduce applications complicates those applications and exposes the production system to the risk of excessive load originating from cluster nodes.
This is where Apache Sqoop fits in. Apache Sqoop is currently undergoing incubation at the Apache Software Foundation. More information on this project can be found at http://incubator.apache.org/sqoop.

Sqoop allows easy import and export of data from structured data stores such as relational databases, enterprise data warehouses, and NoSQL systems. Using Sqoop, you can provision data from an external system onto HDFS, and populate tables in Hive and HBase. Sqoop integrates with Oozie, allowing you to schedule and automate import and export tasks. Sqoop uses a connector-based architecture which supports plugins that provide connectivity to new external systems.

What happens underneath the covers when you run Sqoop is very straightforward. The dataset being transferred is sliced up into different partitions, and a map-only job is launched with individual mappers responsible for transferring a slice of this dataset. Each record of the data is handled in a type-safe manner, since Sqoop uses the database metadata to infer the data types.

In the rest of this post we will walk through an example that shows the various ways you can use Sqoop. The goal of this post is to give an overview of Sqoop operation without going into much detail or advanced functionality.
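The slicing step described above can be sketched in a few lines. This is an illustrative model only, not Sqoop's actual code; it assumes the table has a numeric primary key whose minimum and maximum values are known:

```python
# Illustrative sketch of how a numeric primary-key range can be divided
# into contiguous splits, one per mapper (not Sqoop's implementation).
def make_splits(min_id, max_id, num_mappers):
    """Divide the inclusive range [min_id, max_id] into num_mappers slices."""
    total = max_id - min_id + 1
    base, extra = divmod(total, num_mappers)
    splits, lo = [], min_id
    for i in range(num_mappers):
        size = base + (1 if i < extra else 0)  # spread any remainder evenly
        hi = lo + size - 1
        splits.append((lo, hi))
        lo = hi + 1
    return splits

# Each mapper would then import rows WHERE id BETWEEN lo AND hi.
```

For example, splitting ids 1-100 across four mappers yields (1, 25), (26, 50), (51, 75), (76, 100).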
Importing Data
The following command is used to import all data from a table called ORDERS from a MySQL database:
$ sqoop import --connect jdbc:mysql://localhost/acmedb \
    --table ORDERS --username test --password ****

In this command the various options specified are as follows:
import: This is the sub-command that instructs Sqoop to initiate an import.
--connect <connect string>, --username <user name>, --password <password>: These are connection parameters that are used to connect with the database. This is no different from the connection parameters that you use when connecting to the database via a JDBC connection.
--table <table name>: This parameter specifies the table which will be imported.
The import is done in two steps as depicted in Figure 1 below. In the first step Sqoop introspects the database to gather the necessary metadata for the data being imported. The second step is a map-only Hadoop job that Sqoop submits to the cluster. It is this job that does the actual data transfer using the metadata captured in the previous step.
Figure 1: Sqoop Import Overview
The imported data is saved in a directory on HDFS based on the table being imported. As is the case with most aspects of Sqoop operation, the user can specify any alternative directory where the files should be populated. By default these files contain comma delimited fields, with new lines separating different records. You can easily override the format in which data is copied over by explicitly specifying the field separator and record terminator characters. Sqoop also supports different data formats for importing data. For example, you can easily import data in Avro data format by simply specifying the option --as-avrodatafile with the import command.
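Because the default output is plain comma-delimited text, the imported files are easy to consume downstream. As a rough illustration (the sample rows are hypothetical, not from a real import):

```python
import csv
import io

# Two records as Sqoop would write them by default: comma-delimited
# fields, one record per line (hypothetical ORDERS rows).
sample = "1,2011-10-01,PENDING\n2,2011-10-02,SHIPPED\n"
records = list(csv.reader(io.StringIO(sample)))
# records[0] is ["1", "2011-10-01", "PENDING"]
```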
There are many other options that Sqoop provides which can be used to further tune the import operation to suit your specific requirements.
Importing Data into Hive
In most cases, importing data into Hive is the same as running the import task and then using Hive to create and load a certain table or partition. Doing this manually requires that you know the correct type mapping between the data and other details like the serialization format and delimiters. Sqoop takes care of populating the Hive metastore with the appropriate metadata for the table and also invokes the necessary commands to load the table or partition as the case may be. All of this is done by simply specifying the option --hive-import with the import command:

$ sqoop import --connect jdbc:mysql://localhost/acmedb \
    --table ORDERS --username test --password **** --hive-import

When you run a Hive import, Sqoop converts the data from the native datatypes within the external datastore into the corresponding types within Hive. Sqoop automatically chooses the native delimiter set used by Hive. If the data being imported has new line or other Hive delimiter characters in it, Sqoop allows you to remove such characters and get the data correctly populated for consumption in Hive.
Once the import is complete, you can see and operate on the table just like any other table in Hive.
Importing Data into HBase
You can use Sqoop to populate data in a particular column family within an HBase table. Much like the Hive import, this can be done by specifying the additional options that relate to the HBase table and column family being populated. All data imported into HBase is converted to its string representation and inserted as UTF-8 bytes.

$ sqoop import --connect jdbc:mysql://localhost/acmedb \
    --table ORDERS --username test --password **** \
    --hbase-create-table --hbase-table ORDERS --column-family mysql

In this command the various options specified are as follows:
--hbase-create-table: This option instructs Sqoop to create the HBase table.
--hbase-table: This option specifies the table name to use.
--column-family: This option specifies the column family name to use.
The rest of the options are the same as that for regular import operation.
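The conversion rule mentioned above (every imported value becomes its string representation, stored as UTF-8 bytes) can be illustrated with a toy helper; this is a sketch of the rule, not Sqoop's code:

```python
# Toy illustration of the HBase import conversion described above:
# each value is turned into its string form and encoded as UTF-8 bytes.
def to_hbase_cell(value):
    return str(value).encode("utf-8")

# to_hbase_cell(42) -> b"42"; to_hbase_cell("2011-10-01") -> b"2011-10-01"
```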
Exporting Data
In some cases data processed by Hadoop pipelines may be needed in production systems to help run additional critical business functions. Sqoop can be used to export such data into external datastores as necessary. Continuing our example from above: if data generated by the pipeline on Hadoop corresponded to the ORDERS table in a database somewhere, you could populate it using the following command:

$ sqoop export --connect jdbc:mysql://localhost/acmedb \
    --table ORDERS --username test --password **** \
    --export-dir /user/arvind/ORDERS

In this command the various options specified are as follows:
export: This is the sub-command that instructs Sqoop to initiate an export.
--connect <connect string>, --username <user name>, --password <password>: These are connection parameters that are used to connect with the database. This is no different from the connection parameters that you use when connecting to the database via a JDBC connection.
--table <table name>: This parameter specifies the table which will be populated.
--export-dir <directory path>: This is the directory from which data will be exported.
Export is done in two steps as depicted in Figure 2. The first step is to introspect the database for metadata, followed by the second step of transferring the data. Sqoop divides the input dataset into splits and then uses individual map tasks to push the splits to the database. Each map task performs this transfer over many transactions in order to ensure optimal throughput and minimal resource utilization.
Figure 2: Sqoop Export Overview
Some connectors support staging tables that help isolate production tables from possible corruption in case of job failures for any reason. Staging tables are first populated by the map tasks and then merged into the target table once all of the data has been delivered.
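The per-mapper transfer described above -- pushing a split to the database over many moderately sized transactions -- can be sketched as follows. This is a simplified model using the Python DB-API with an illustrative ORDERS table; it is not Sqoop's actual export code:

```python
# Simplified sketch of one map task's export loop: rows from its split are
# inserted in batches, with one commit per batch to balance throughput
# against transaction size (not Sqoop's implementation).
def export_split(conn, rows, batch_size=100):
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        batch = rows[i:i + batch_size]
        cur.executemany("INSERT INTO ORDERS VALUES (?, ?, ?)", batch)
        conn.commit()  # each batch is its own transaction
```

A real export would also have to handle retries and, where staging tables are available, write to the staging table and merge at the end.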
Sqoop Connectors
Using specialized connectors, Sqoop can connect with external systems that have optimized import and export facilities, or do not support native JDBC. Connectors are plugin components based on Sqoop's extension framework and can be added to any existing Sqoop installation. Once a connector is installed, Sqoop can use it to efficiently transfer data between Hadoop and the external store supported by the connector.

By default Sqoop includes connectors for various popular databases such as MySQL, PostgreSQL, Oracle, SQL Server and DB2. It also includes fast-path connectors for MySQL and PostgreSQL databases. Fast-path connectors are specialized connectors that use database-specific batch tools to transfer data with high throughput. Sqoop also includes a generic JDBC connector that can be used to connect to any database that is accessible via JDBC.

Apart from the built-in connectors, many companies have developed their own connectors that can be plugged into Sqoop. These range from specialized connectors for enterprise data warehouse systems to NoSQL datastores.
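The lookup implied by this plugin architecture can be illustrated with a toy registry; the class and its scheme-matching rule are invented for illustration and do not reflect Sqoop's actual extension API:

```python
# Toy connector registry: specialized connectors are keyed by the database
# scheme in the JDBC connect string, with a generic JDBC fallback.
class ConnectorRegistry:
    def __init__(self):
        self._connectors = {}

    def register(self, scheme, connector):
        self._connectors[scheme] = connector

    def lookup(self, connect_string):
        # "jdbc:mysql://host/db" -> scheme "mysql"
        scheme = connect_string.split("://", 1)[0].split(":")[-1]
        return self._connectors.get(scheme, self._connectors.get("jdbc"))
```

With a fast-path connector registered for "mysql", a connect string like jdbc:mysql://localhost/acmedb would select it, while an unrecognized scheme would fall back to the generic JDBC connector.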
Wrapping Up
In this post you saw how easy it is to transfer large datasets between Hadoop and external datastores such as relational databases. Beyond this, Sqoop offers many advanced features such as different data formats, compression, working with queries instead of tables, and so on. We encourage you to try out Sqoop and give us your feedback. More information regarding Sqoop can be found at:
Project Website: http://incubator.apache.org/sqoop
Wiki: https://cwiki.apache.org/confluence/display/SQOOP
Project Status: http://incubator.apache.org/projects/sqoop.html
Mailing Lists: https://cwiki.apache.org/confluence/display/SQOOP/Mailing+Lists
Posted over 12 years ago by khirano
I am a moderator for ooo-general-ja/AT/incubator.apache.org.
I have checked the mail archives on mail-archives.apache.org and found that there are three non-English language mailing lists there, such as dev-br/AT/spamassassin.apache.org, dev-de/AT/spamassassin.apache.org, and axis-user-ja/AT/ws.apache.org.
axis-user-ja/AT/ws.apache.org may be the first Japanese-language mailing list at Apache. See one of the posts from its archive, posted on Wed, 01 Dec 2004 06:18:12 GMT: it is in Japanese, encoded as ISO-2022-JP, but parts of it are garbled.
Two Japanese moderators and two Japanese volunteer testers are now testing ooo-general-ja/AT/incubator.apache.org.
I hope the Japanese will not be garbled :)
Posted over 12 years ago by Sally
Groundbreaking, lightweight, scalable, all-Apache stack ideal for use in enterprise-grade Cloud applications
The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of nearly 150 Open Source projects and initiatives, today announced that Apache TomEE has obtained certification as a Java EE 6 Web Profile Compatible Implementation.
Making its certification debut at JavaOne, Apache TomEE (pronounced "Tommy") is the Java Enterprise Edition of Apache Tomcat (Tomcat + Java EE = TomEE) that unites several quality Java enterprise projects including Apache OpenEJB, Apache OpenWebBeans, Apache OpenJPA, Apache MyFaces, and more.
"It is with great pride that we're announcing Apache TomEE as a certified implementation of the Java EE 6 Web Profile," said David Blevins, Vice President of Apache OpenEJB and original co-developer of TomEE. "Apache TomEE is the newest addition to the Java EE server space, standing alongside the likes of GlassFish, JBoss, and Apache Geronimo."
Developers build applications using Java EE-certified products to ensure portability across Java Enterprise Edition-compatible solutions. Apache TomEE is one of only six certified implementations available to the industry today.
Redefining Enterprise Cloud; Unifying Communities
The three core design objectives for TomEE were: 1) do not alter Tomcat; 2) maintain simplicity; and 3) avoid architecture overhead. This enables developers to quickly and easily build highly performant, lightweight enterprise solutions using leading Apache projects without the need for complex modifications or customization. Apache TomEE's integration of Apache OpenWebBeans, Apache MyFaces, Apache ActiveMQ, Apache OpenJPA, and Apache CXF is simple, to-the-point, and focused on the singular task of delivering the Java EE 6 Web Profile in a minimalist fashion.
The simple, all-Apache stack is both incredibly light and fully embeddable, making it ideal for testing and use in today's evolution of the enterprise Cloud, where the key to scalability is hundreds of tiny servers, as opposed to the traditional measure of how large your servers are. Apache TomEE boasts groundbreaking performance in the following areas:
- Size: exceptionally small (about 24MB for the entire Web Profile), consumes very little resources;
- Memory: TCK (Technology Compatibility Kit) passed with no additional memory settings beyond the default – a first in Java EE; and
- Speed: runs exceptionally fast in embedded mode: start/deploy/test/undeploy/stop in 2-3 seconds.
"No longer do developers have to ask 'Do we use Tomcat or Java EE?' at the start of a project, as has been the case for the last 10 years," explained Blevins. "These two camps have historically been separate, and certification is a major step in unifying these communities. With TomEE, developers can now retire untested legacy stacks and use a reliable product that doesn't deviate from the Tomcat that they know and love."
Blevins and members of the Apache OpenEJB community will be presenting several sessions, including "TomEE – Tomcat with a Kick", in the "Servers/Tomcat & Geronimo" track at ApacheCon, 7-11 November 2011, in Vancouver, Canada. To register, visit http://apachecon.com/.
Availability and Oversight
Apache TomEE software is released under the Apache License v2.0, and is overseen by the Apache OpenEJB Project Management Committee (PMC) that guides the Project's day-to-day operations, community development, and product releases. Apache TomEE is certified on Amazon EC2 t1.micro, m1.small, and m1.large 32-bit images; certification on 64-bit EC2 images and other Cloud platforms is in the Project's future plans. Cloud vendors wishing to donate resources for TomEE to be certified on their platforms are encouraged to contact the Apache OpenEJB Project for information on how to participate. Apache TomEE source code, documentation, mailing lists, and related resources are available at http://openejb.apache.org/.
About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees nearly one hundred fifty leading Open Source projects, including Apache HTTP Server -- the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 350 individual Members and 3,000 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) not-for-profit charity, funded by individual donations and corporate sponsors including AMD, Basis Technology, Cloudera, Facebook, Google, HP, Hortonworks, IBM, Matt Mullenweg, Microsoft, PSW Group, SpringSource/VMware, and Yahoo!. For more information, visit http://www.apache.org/.
"Apache", "Apache OpenEJB", and "Apache TomEE" are trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.
# # #
Posted over 12 years ago by Sally
WHO: The Apache Software Foundation (ASF). Apache powers half the Internet, terabytes of data, teraflops of operations, billions of objects, and enhances the lives of countless users and developers. Established in 1999 to shepherd, develop, and incubate Open Source innovations "The Apache Way", the ASF oversees 150+ projects led by a volunteer community of over 350 individual Members and 3,000 Committers across six continents.
WHAT: ApacheCon, the ASF's official conference, trainings, and expo. This year's theme is "Open Enterprise Solutions, Cloud Computing, and Community Leadership", featuring dozens of sessions that help accelerate success in the use, development, and deployment of Apache projects across the Open Source ecosystem.
ApacheCon keynotes will be presented by:
- David A. Wheeler, noted Open Source expert and author of "Why Open Source Software / Free Software? Look at the Numbers!" and "Nearly all FLOSS is Commercial";
- Eric Baldeschwieler, CEO of Hortonworks, and the former VP Hadoop Software Engineering at Yahoo!; and
- David Boloker, Distinguished Engineer and CTO of IBM's Emerging Internet Technology group.
Dozens of recognized experts on Apache technologies -- from Axis2 to ZooKeeper -- comprise ApacheCon's speakers and training faculty. They include representatives from: The Apache Software Foundation, Adobe Systems, Akamai, Cloudera, LinkedIn, NASA Jet Propulsion Laboratory, Nokia, Red Hat, SpringSource, Yahoo!, and many more.
WHEN: ApacheCon, 7-11 November 2011
WHERE: The Westin Bayshore, Vancouver, British Columbia, Canada
KEY HIGHLIGHTS:
- Monday and Tuesday, 7-8 November – Pre-conference
Trainings, Apache Hackathon, BarCampApache, Apache Project MeetUps
- Wednesday, 9 November – Conference Day 1
Keynote Presentation: David A. Wheeler – 9:30 AM PT
Session Tracks: Apache In Space!, Business, Data handling & analytics (Lucene & friends), Innovation & Emerging Technologies, Servers (Tomcat and Geronimo), Fast Feather Talks
Events: Welcome Reception, Apache Project MeetUps
- Thursday, 10 November – Conference Day 2
Keynote Presentation: Eric Baldeschwieler – 1:30 PM PT
Session Tracks: Community, Content Technologies, Data handling – Big Data, Enterprise Java, Innovation & Emerging Technologies, Servers (HTTP)
Events: Lightning Talks, Apache Project MeetUps
- Friday, 11 November – Conference Day 3
Keynote Presentation: David Boloker – 11:30 AM PT
Session Tracks: Content Technologies, Data handling & analytics (Lucene & friends), Infrastructure & DevOps, Innovation & Emerging Technologies, Modular Java
To view the complete ApacheCon schedule, visit http://apachecon.com/.
ApacheCon Sponsors include: Cloudera at the Platinum level; HP, Hortonworks, and HotWax Media at the Gold level; FuseSource and VMware at the Silver level; and Facebook at the Bronze level. Exhibitors include: AppDynamics, FuseSource, Hortonworks, Ning, VMware, and WSO2.
Community Sponsors include: The Apache Software Foundation, CrowdVine, Jahia, and Oracle. Media Partners include: ADMIN Magazine, The Bitsource, Conferencevault, DZone, FeatherCast, The Linux Gazette, Linux Pro Magazine, OSCON, Ostatic, and ReadWriteWeb.
About The Apache Software Foundation (ASF)
Established in 1999, the all-volunteer Foundation oversees nearly one hundred fifty leading Open Source projects, including Apache HTTP Server -- the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 350 individual Members and 3,000 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) not-for-profit charity, funded by individual donations and corporate sponsors including AMD, Basis Technology, Cloudera, Facebook, Google, HP, Hortonworks, IBM, Matt Mullenweg, Microsoft, PSW Group, SpringSource/VMware, and Yahoo!. For more information, visit http://www.apache.org/.
"Apache" and "ApacheCon" are trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.
# # #
Posted over 12 years ago by Sally
Powers
smart search and indexing solutions for AOL, Apple, Comcast, Disney,
IBM, LinkedIn, Twitter, Wikipedia, and more.
Forest
Hill, MD – 27 September 2011 –
The Apache Software Foundation (ASF), the all-volunteer developers,
stewards, and
incubators of nearly 150 Open Source projects and
initiatives, today announced the 10th anniversary of Apache Lucene.
The
Lucene information retrieval software was first developed in 1997,
entered the ASF as a sub-project of the Apache Jakarta project in
2001, and became a standalone, Top-Level Project (TLP) in 2005.
Apache Top-Level Projects and their communities demonstrate that they
are well-governed under the Foundation’s meritocratic,
consensus-driven process and principles.
"Ten
years ago, Apache provided Lucene a home where it could build a solid
community. Today we can see the fruit of that community, both through
the wide breadth of Lucene-based applications deployed, and through
the depth of improvements to Lucene made in the past decade,"
said Doug Cutting, ASF Chairman and original Lucene creator.
Apache
Lucene powers smart search and indexing for eCommerce, financial
services, business intelligence, travel, social networking,
libraries, publishing, government, and defense solutions.
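At the heart of Lucene's search capability is an inverted index, which maps each term to the documents containing it. The following toy sketch in Python (illustrative only; this is not Lucene's actual API, and the sample documents are invented) shows the basic idea:

```python
from collections import defaultdict

# Tiny corpus of documents, keyed by document id (hypothetical examples).
docs = {
    1: "open source search software",
    2: "distributed search for the enterprise",
    3: "open standards for publishing",
}

# Build the inverted index: term -> set of ids of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(term):
    """Return the sorted ids of documents containing the given term."""
    return sorted(index[term.lower()])

print(search("search"))  # -> [1, 2]
print(search("open"))    # -> [1, 3]
```

Lucene layers far more on top of this core structure, including tokenization and analysis, relevance scoring, and compressed on-disk index segments, but term-to-document lookup remains the fundamental operation.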
"Lucene
has changed the world by opening doors that didn't exist before it
arrived on the Open Source scene,” said ASF Member and Apache
Lucene Committer Erik Hatcher. “Lucene has massively disrupted the
enterprise/proprietary search market, with wide adoption around the
globe in every industry.”
Highly
performant, Apache Lucene is in use across an array of applications,
from mobile to Internet scale, and powers enterprise-grade search
solutions for AOL, Apple, IBM (including its artificial
intelligence-driven supercomputer Watson), LinkedIn, Netflix,
Wikipedia, Zappos, and many other global organizations.
"When
it arrived at the ASF, Lucene immediately made a huge impact -- Lucene was
one of those technologies that made a whole generation of businesses
possible-- it was fast, easy to use, free, and had a growing
community of users and developers. Apache Lucene can be found in an
amazing number of products and services we all know and use, as well
as in products and services we have never heard of,” said ASF
Member and Apache Lucene Committer Otis Gospodnetic.
"While
it's been six years since I joined the Lucene community, the last two
were certainly the most exciting,” said Simon Willnauer, Vice
President of Apache Lucene.
Current
Apache Lucene sub-projects are PyLucene and Open Relevance; other
sub-projects, including Droids, Lucene.Net, and Lucy, have spun out
of the project and are undergoing further development in the Apache
Incubator with the intention of becoming standalone TLPs. Solr, the
high-speed Open Source enterprise search platform, has merged into
the Lucene project itself, whilst former Lucene sub-projects Hadoop,
Mahout, Nutch, and Tika have all successfully graduated as autonomous
Apache Hadoop, Apache Mahout, Apache Nutch, and Apache Tika TLPs.
Originally
written in Java, Apache Lucene is available in many programming
languages such as Perl, C#, C++, PHP, Python, and Ruby. “Now, 10
years later, Apache Lucene is backed by a large community of users,
contributors and developers with incredible energy poured into Lucene
every hour of every day of the year," said Gospodnetic, who is
also co-author of Lucene in Action, and founder of Sematext
International.
“Even
after 10 years, it seems this blazing community and codebase hasn't
reached its potential yet,” added Willnauer. “I'm proud to be
part of this community and look forward to another decade of Open
Source Search."
Hatcher,
who is also co-author of Lucene in Action and co-founder of Lucid
Imagination, added, “If you need search (and you do!), Lucene is
the best core technology choice."
Hatcher,
Willnauer, and other members of the Apache Lucene community will be
presenting sessions on data handling and analytics -- a.k.a. “Lucene
and Friends” -- including what's upcoming in Apache Lucene 4.0 (with
performance improvements of up to 20,000% over previous versions and
more) at ApacheCon, 7-11 November 2011, in Vancouver, Canada. To
register, visit http://apachecon.com/.
Availability and Oversight
Apache
Lucene software is released under the Apache License v2.0, and is
overseen by a self-selected team of active contributors to the
project. A Project Management Committee (PMC) guides the Project’s
day-to-day operations, including community development and product
releases. Apache Lucene source code, documentation, mailing lists,
and related resources are available at http://lucene.apache.org/.
About The Apache Software Foundation (ASF)
Established
in 1999, the all-volunteer Foundation oversees nearly one hundred
fifty leading Open Source projects, including Apache HTTP Server --
the world's most popular Web server software. Through the ASF's
meritocratic process known as "The Apache Way," more than
350 individual Members and 3,000 Committers successfully collaborate
to develop freely available enterprise-grade software, benefiting
millions of users worldwide: thousands of software solutions are
distributed under the Apache License; and the community actively
participates in ASF mailing lists, mentoring initiatives, and
ApacheCon, the Foundation's official user conference, trainings, and
expo. The ASF is a US 501(c)(3) not-for-profit charity, funded by
individual donations and corporate sponsors including AMD, Basis
Technology, Cloudera, Facebook, Google, HP, Hortonworks, IBM, Matt
Mullenweg, Microsoft, PSW Group, SpringSource/VMware, and Yahoo!. For
more information, visit http://www.apache.org/.
"Apache"
and “Apache Lucene” are trademarks of The Apache Software
Foundation. All other brands and trademarks are the property of their
respective owners.
# # #