
News

Posted over 13 years ago by pctony
Folks,

The infrastructure team are pleased to announce the availability of id.apache.org, the new password management tool for all ASF committers and members. This new service will allow users to:

- Reset forgotten LDAP passwords themselves; there is no need to contact the Infra team anymore.
- Change their LDAP password.
- Update their LDAP record, i.e. change forename, surname or mail attributes. [1]

Users should note that this service will only allow you to manage your LDAP password, thus controlling access to those resources currently protected by LDAP authnz. Once logged in you will note that some fields are not editable; this is by design, and they are there merely to show you your LDAP entry. You are currently only allowed to edit your Surname, Given name (Forename), and Mail attributes. This list may be extended as we make more features available, and any changes will be announced as and when they happen.

Users of this service should note that we have a few small bugs to iron out, and this will be done as soon as possible. For example, if you attempt to modify your details and do not re-enter your password, you will currently see a generic HTTP 500 error.

Thanks must go to Ian Boston (ieb) and Daniel Shahaf (danielsh) for making this work. Ian provided the initial code (his first ever attempt at Python, too). Daniel then took it and implemented several changes and generally improved the backend.

[1] It should be noted that updating your mail record in LDAP will not currently have any effect on where your apache.org email is forwarded. This is planned to take place later this year.
Posted over 13 years ago by markt
The Apache Tomcat team is pleased to announce the release of Apache Tomcat 7.0.6, the first stable release in the Tomcat 7 series. Apache Tomcat 7.0.6 contains further performance improvements in session management, a new binary distribution targeted at users embedding Tomcat in other applications, and several enhancements to the memory leak detection and prevention features. The Tomcat 7.0.6 release also contains around 60 bug fixes compared to 7.0.5. Apache Tomcat 7.0.6 can be downloaded from the Tomcat 7 download page. The full change log is available in the Tomcat documentation.
Posted over 13 years ago by markt
The Apache Tomcat team is pleased to announce the release of Apache Tomcat 6.0.30. This is primarily a security and bug-fix release, with one moderate security vulnerability addressed and around 100 bug fixes and enhancements. Apache Tomcat 6.0.30 can be downloaded from the Tomcat 6 download page. The full changelog is available in the Tomcat documentation.
Posted over 13 years ago by Anne Kathrine Petterøe
Introduction

While many projects have been using Apache Maven, a widely adopted and recognized project management tool, the Scala-based Simple Build Tool (SBT) has recently become a very popular dependency management tool. SBT has the following advantages:

- The build file is written in the Scala language, which is much more concise than verbose XML and provides the full power of the Scala platform and libraries.
- SBT intelligently tracks changes in source code to make recompilation accurate.
- SBT console mode keeps scalac resident, which really improves compile times on subsequent runs. This is important for scalac, which is quite slow compared to javac.
- SBT supports continuous compilation and testing.
- While SBT is based on Apache Ivy, also a very popular dependency management tool, it is possible to use both remote and local Maven repositories with SBT.
- SBT has custom actions.

That said, SBT integration with popular IDEs and CI tools is not yet as good as Maven integration (I'll show an example of SBT and IDEA integration and give links to resources related to Hudson integration). It is possible to build the ESME project both with Maven and SBT. At the moment Maven is the main build tool; SBT might be used locally by developers to improve performance. The subject of this article is building the project with SBT.

Installation and configuration

1. Download the SBT jar file and place it into a local directory.
2. Create a script to launch SBT and specify the path to the SBT jar. For example, below is a script for the Linux platform:

   java -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=256m -Xmx512M -Xss2M -jar `dirname $0`/sbt-launch-0.7.4.jar "$@"

   (The JVM options above were added to the launch script to avoid frequent OutOfMemory errors.)

Now it is possible to launch SBT. But before proceeding to the project structure, it is useful to know how SBT manages its dependencies.

Dependency management

As noted before, SBT is based on Apache Ivy, a popular dependency management tool. This means that all dependencies are downloaded to a local repository (located by default in the $USER_HOME/.ivy2 folder) and organized in a manner similar to Maven: organization/module/artifact/revision, where organization and revision are the analogs of Maven's groupId and version respectively. All automatically managed dependencies specified in the project definition file are placed into the PROJECT_ROOT/lib_managed directory.

Project structure

The overall project structure has the following form:

   ESME root
   └── server
       └── project
           ├── build
           │   └── EsmeProject.scala
           ├── plugins
           │   └── Plugins.scala
           └── build.properties

SBT build artifacts have only been added to the ESME server module, therefore they are located under the server/project folder. These artifacts include:

- EsmeProject.scala — the project definition file containing the build configuration
- Plugins.scala — plugins for the project definition are declared in this file
- build.properties — contains the project name, version, organization, Scala and SBT versions, and other custom properties

The most important file among these is the project file, so let's review it step by step.

Project File

The project definition class for the web project, EsmeProject, should extend DefaultWebProject (which in turn extends the base class for all types of projects, sbt.Project):

   class EsmeProject(info: ProjectInfo) extends DefaultWebProject(info)

Dependency versions are defined as vals:

   val liftVersion = "2.2-RC6"
   val compassVersion = "2.1.1"
   val luceneVersion = "2.4.0"

It is possible to tell SBT to search for dependencies in the local Maven repository.
To do that, it is necessary to define a mavenLocal val and assign it the path in the local filesystem where the Maven repository resides:

   val mavenLocal = "Local Maven Repository" at "file://" + Path.userHome + "/.m2/repository"

All additional repositories are also defined as vals. To include the Scala Tools Snapshots repository, the predefined constant ScalaToolsSnapshots is used; for other repos an explicit URL is specified:

   val scalatoolsSnapshot = ScalaToolsSnapshots
   val compassRepo = "Compass Repository" at "http://repo.compass-project.org"
   val twitterRepo = "Twitter Repository" at "http://maven.twttr.com"

The project might contain additional files like licenses and notices. To include them in the target jar it is necessary to define the method extraResources and override the mainResources method, as shown below (assuming that both files are located in the project's root directory):

   def extraResources = "LICENSE" +++ "NOTICE"
   override def mainResources = super.mainResources +++ extraResources

It is possible to specify a dependency definition with an inline Ivy configuration file (for example, to include or exclude dependent modules). To do that, it is necessary to override the ivyXML method:

   override def ivyXML =
     <dependencies>
       <dependency org="net.lag" name="configgy" rev="2.0.1">
         <exclude org="org.scala-tools" module="vscaladoc"/>
       </dependency>
       <dependency org="com.twitter" name="ostrich" rev="2.3.2">
         <exclude org="org.scala-tools" module="vscaladoc"/>
       </dependency>
     </dependencies>

SBT manages dependencies based on dependency expressions in the project's definition file. Dependency expressions have the following form:

   groupID % artifactID % revision % configuration

In case a dependency was built with SBT, it is necessary to change the first % to %%:

   groupID %% artifactID % revision % configuration

That way SBT knows how to download the correct jar corresponding to the Scala version used to build the project (for example lift-webkit_2.8.1-2.2-RC6.jar). It is possible to use a range to specify the version, as in Maven. For example, the value [6.1.6,) corresponds to all versions greater than or equal to 6.1.6. A configuration has the form A->B, where configuration A uses the dependency's configuration B. The ESME project definition contains compile and test configurations. As an example, the expression "junit" % "junit" % "4.4" % "test->default" says that the ESME test configuration uses JUnit's default configuration. If no configuration mapping is specified explicitly, the compile->default mapping is used by default. One way to list all dependencies is to define the libraryDependencies method containing a Set of dependency expressions:

   override def libraryDependencies = Set(
     "net.liftweb" %% "lift-webkit" % liftVersion % "compile->default",
     "net.liftweb" %% "lift-mapper" % liftVersion % "compile->default",
     ...
     "org.compass-project" % "compass" % compassVersion % "compile->default",
     "org.apache.lucene" % "lucene-core" % luceneVersion % "compile->default",
     ...
     "org.mortbay.jetty" % "jetty" % "[6.1.6,)" % "test->default",
     "junit" % "junit" % "4.4" % "test->default",
     ...
   ) ++ super.libraryDependencies

A consolidated sketch that puts the fragments above together into a single project file is shown below.
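To see how these pieces fit together, here is a minimal sketch of what server/project/build/EsmeProject.scala could look like when the fragments above are combined. It is abbreviated for illustration (the real ESME build declares more repositories and many more dependencies) and assumes the SBT 0.7.x APIs described in this article:

   import sbt._

   // Abbreviated sketch only; the actual ESME project file contains many more entries.
   class EsmeProject(info: ProjectInfo) extends DefaultWebProject(info) {
     // Dependency versions, defined once and reused below
     val liftVersion = "2.2-RC6"
     val luceneVersion = "2.4.0"

     // Repositories: local Maven repository plus additional remote repositories
     val mavenLocal = "Local Maven Repository" at "file://" + Path.userHome + "/.m2/repository"
     val scalatoolsSnapshot = ScalaToolsSnapshots
     val compassRepo = "Compass Repository" at "http://repo.compass-project.org"

     // Extra resources (license and notice files) bundled into the target jar
     def extraResources = "LICENSE" +++ "NOTICE"
     override def mainResources = super.mainResources +++ extraResources

     // Automatically managed dependencies (abbreviated)
     override def libraryDependencies = Set(
       "net.liftweb" %% "lift-webkit" % liftVersion % "compile->default",
       "org.apache.lucene" % "lucene-core" % luceneVersion % "compile->default",
       "junit" % "junit" % "4.4" % "test->default"
     ) ++ super.libraryDependencies
   }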
Build properties

The SBT loader reads the versions of Scala and sbt to use to build the project from the project/build.properties file. If this is the first time the project has been built with those versions, the loader downloads the appropriate versions to the project/boot directory. The sbt loader will then load the right version of the main sbt builder using the right version of Scala. Other user-defined properties may also be set in this file. Below is an example of the build properties file for ESME:

   project.organization=Apache Software Foundation
   project.name=Apache Enterprise Social Messaging Environment (ESME)
   sbt.version=0.7.4
   project.version=1.2
   def.scala.version=2.8.1
   build.scala.versions=2.8.1
   project.initialize=false

Plugins

All plugins for SBT are specified in the plugins definition file, Plugins.scala. Its structure is very similar to the project definition file. Let's review plugin configuration via the example of the sbt-idea plugin, which is used to generate project artifacts for IntelliJ IDEA. (A consolidated sketch of this file appears after the command overview below.) The Plugins class should extend the PluginDefinition base class:

   class Plugins(info: ProjectInfo) extends PluginDefinition(info)

Additional repositories used by plugins are declared as vals:

   val sbtIdeaRepo = "sbt-idea-repo" at "http://mpeltonen.github.com/maven/"

As in a project definition file, dependencies for plugins are specified via dependency expressions:

   val sbtIdea = "com.github.mpeltonen" % "sbt-idea-plugin" % "0.1.0"

With the sbt-idea plugin configured, it is possible to issue the commands:

   sbt update
   sbt idea

and the IDEA project file will be generated in the project's root directory.

Main commands

All interaction with SBT is performed via commands, which are usually executed in the SBT console. To enter the SBT console, run sbt in the project directory.

Clean

The first group of commands is used to clean generated and downloaded artifacts. The clean command deletes the target directory where all generated files are located. The clean-cache command deletes the local Ivy repository, where downloaded artifacts and metadata for this user's automatically managed dependencies reside. The clean-lib command deletes lib_managed, the managed library directory for this project.

Update

The update command is used to resolve and download all external dependencies into the local Ivy repository.

Compile

The compile command performs compilation of all Scala source files located in the src/main/scala folder. It is possible to specify additional options for the compile task by overriding the compileOptions method.

Test

The test command executes all tests that have been found during compilation. It runs the test-compile command first (as with the compile task, it is possible to specify additional options for the test-compile task by overriding the testCompileOptions method).

Jetty

The jetty-run command starts the Jetty server and serves this project as a web application, on http://localhost:8080 by default. This variant of starting Jetty is intended to be run from the interactive prompt. The jetty-stop command stops the Jetty server that was started with the jetty-run action.

Dependencies list

Sometimes it is necessary to analyze the whole tree of used dependencies (for example, to prevent conflicts between them). Maven has a specific plugin, dependency:tree, to do just that. It is also possible to perform a similar task with SBT via the Project Console. The console-project command starts the Scala interpreter with access to the project and to sbt. For example, to get the list of dependencies on the compile classpath, the current.publicClasspath command should be executed. Similar commands, current.testClasspath and current.optionalClasspath, exist for the test classpath and the optional classpath respectively.
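For completeness, the plugin fragments above can likewise be collected into a single file. A minimal sketch of server/project/plugins/Plugins.scala, assuming the SBT 0.7.x plugin mechanism described in this article, might look like this:

   import sbt._

   // Sketch assembled from the fragments in the Plugins section above.
   class Plugins(info: ProjectInfo) extends PluginDefinition(info) {
     // Repository hosting the sbt-idea plugin
     val sbtIdeaRepo = "sbt-idea-repo" at "http://mpeltonen.github.com/maven/"
     // The plugin itself, declared like an ordinary dependency
     val sbtIdea = "com.github.mpeltonen" % "sbt-idea-plugin" % "0.1.0"
   }

With this file in place, running sbt update followed by sbt idea, as described above, should generate the IDEA project files.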
Integration with Hudson

While integration of SBT with Hudson, a popular Continuous Integration tool, hasn't been used in the ESME project yet, there are some resources available on the topic:

- Hudson SBT plugin: https://github.com/hudson/sbt-plugin
- Christoph Henkelmann's blog entry, "How to build sbt projects with Hudson and integrate test results": http://henkelmann.eu/2010/11/14/sbt_hudson_with_test_integration

Links

- Simple Build Tool home: http://code.google.com/p/simple-build-tool/
- Apache Ivy home: http://ant.apache.org/ivy/
- Lift Wiki page related to SBT: http://www.assembla.com/wiki/show/liftweb/Using_SBT
- Mikko Peltonen's sbt-idea plugin home: https://github.com/mpeltonen/sbt-idea

A big thank you goes out to Vladimir Ivanov for the SBT implementation and for writing up this blog post explaining how it works!
Posted over 13 years ago by Sally
Highly-scalable Open Source Distributed Database for Handling Large Amounts of Data is a Key Component in Cloud Computing

Forest Hill, MD – 11 January 2011 – The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of nearly 150 Open Source projects and initiatives, today announced Apache Cassandra v0.7, the highly-scalable, second generation Open Source distributed database.

"Apache Cassandra is a key component in cloud computing and other applications that deal with massive amounts of data and high query volumes," said Jonathan Ellis, Vice President of Apache Cassandra. "It is particularly successful in powering large web sites with sharp growth rates."

Apache Cassandra is successfully deployed at organizations with active data sets and large server clusters, including Cisco, Cloudkick, Digg, Facebook, Rackspace, and Twitter. The largest Cassandra cluster to date contains over 400 machines.

"Running any large website is a constant race between scaling your user base and scaling your infrastructure to support it," said David King, Lead Developer at Reddit. "Our traffic more than tripled this year, and the transparent scalability afforded to us by Apache Cassandra is in large part what allowed us to do it on our limited resources. Cassandra v0.7 represents the real-life operations lessons learned from installations like ours and provides further features like column expiration that allow us to scale even more of our infrastructure."

Among the new features in Apache Cassandra v0.7 are:

- Secondary Indexes, an expressive, efficient way to query data through node-local storage on the client side;
- Large Row Support, up to two billion columns per row;
- Online Schema Changes – automated online schema changes from the client API allow adding and modifying object definitions without requiring a cluster restart.

Oversight and Availability

Apache Cassandra is available under the Apache Software License v2.0, and is overseen by a Project Management Committee (PMC), who guide its day-to-day operations, including community development and product releases. Apache Cassandra v0.7 downloads, documentation, and related resources are available at http://cassandra.apache.org/.

About The Apache Software Foundation (ASF)

Established in 1999, the all-volunteer Foundation oversees nearly one hundred fifty leading Open Source projects, including Apache HTTP Server — the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 300 individual Members and 2,500 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is funded by individual donations and corporate sponsors including AMD, Basis Technology, Cloudera, Facebook, Google, IBM, HP, Matt Mullenweg, Microsoft, SpringSource, and Yahoo!. For more information, visit http://www.apache.org/.

# # #
Posted over 13 years ago by Sally
Open Source middleware for managing, unifying, and archiving data used in critical scientific applications, including NASA Jet Propulsion Laboratory, National Cancer Institute, and Children's Hospital Los Angeles Virtual Pediatric Intensive Care Unit

The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of nearly 150 Open Source projects and initiatives, today announced that Apache OODT (Object-Oriented Data Technology) has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the Project's community and products have been well-governed under the ASF's meritocratic, consensus-driven process and principles.

Apache OODT is "middleware for metadata" (and vice versa), used for computer processing workflow, hardware and file management, information integration, and linking databases. The OODT architecture allows distributed computing and data resources to be searchable and utilized by any end user.

Originally developed in 1998 by Daniel Crichton at NASA's Jet Propulsion Laboratory to build a national framework for data sharing, OODT was quickly applied to other areas in physical science, medical research, and ground data systems. Early implementations include the National Cancer Institute's Early Detection Research Network, as well as several programs at NASA, including the NASA Planetary Data System, the SeaWINDS QuikSCAT project, the OCO/Atmospheric Carbon Observations from Space project, the joint NASA/DOD/NOAA NPOESS Preparatory Project, and the Soil Moisture Active Passive mission testbed. In addition, Apache OODT is also used in a number of research and technology tasks spanning astrophysics, radio astronomy, and climate change research. Apache OODT is also currently supporting research and data analysis within the pediatric intensive care domain in collaboration with Children's Hospital Los Angeles (CHLA) and its Laura P. and Leland K. Whittier Virtual Pediatric Intensive Care Unit (VPICU).

"OODT had been successfully operating within the JPL for years; the time had come to realize the benefits of open source in the broader, external community," said Chris Mattmann, Vice President of Apache OODT. "Bringing new developer talents is integral in enhancing the quality of the OODT code, and making OODT available as an Apache project was an ideal way to introduce new features and capabilities."

OODT is the first NASA-developed software package to become an ASF TLP (OODT was submitted to the Apache Incubator in January 2010). Projects incubating at the ASF benefit from hands-on mentoring from other Apache contributors, as well as the Foundation's widely-emulated process, stewardship, outreach, support, and community events.

"The Apache Software Foundation has a long history of software innovation through collaboration -- the larger the pool of potential contributors, the more innovation we see," said Mattmann. "The Apache model and the Incubation process provided great guidance. We received solid mentoring, infrastructure, and development support from the Apache Software Foundation."

Oversight and Availability

All Apache products are released under the Apache Software License v2.0, and are overseen by a self-selected team of active contributors to the project. Upon a Project's maturity to a TLP, a Project Management Committee (PMC) is formed to guide its day-to-day operations, including community development and product releases.

"We are working on improvements to our initial, v0.1 Apache release," added Mattmann. "Our dozen-strong team of contributors are developing new components to more reliably and accurately extract metadata from science datasets."

Apache OODT downloads, documentation, and related resources are available at http://oodt.apache.org/.

About the Apache Incubator and Incubation Process

The Apache Incubator is the entry path for projects and codebases wishing to become part of the efforts at The Apache Software Foundation. All code donations from external organisations and existing external projects wishing to join the ASF enter through the Incubator to: 1) ensure all donations are in accordance with the ASF legal standards; and 2) develop new communities that adhere to our guiding principles. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF. For more information, visit http://incubator.apache.org/.

About The Apache Software Foundation (ASF)

Established in 1999, the all-volunteer Foundation oversees nearly one hundred fifty leading Open Source projects, including Apache HTTP Server — the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 300 individual Members and 2,500 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is funded by individual donations and corporate sponsors including AMD, Basis Technology, Cloudera, Facebook, Google, IBM, HP, Matt Mullenweg, Microsoft, SpringSource, and Yahoo!. For more information, visit http://www.apache.org/.

# # #

MEDIA CONTACT: Sally Khudairi, The Apache Software Foundation, [email protected], +1 617 921 8656
Posted over 13 years ago by Sally
In light of some recent press releases and blog posts from WANdisco ([1], [2], [3]), the Apache Subversion project would like to clarify that WANdisco, Inc participates in Subversion development under the same terms as any other organization. Those wishing to verify this may prefer to use the project's public mailing lists and change logs as primary sources, rather than WANdisco's press releases. Below are some of our specific concerns. We look forward to WANdisco's continued participation in improving Subversion, and emphasize that WANdisco's corporate statements do not reflect on our valued developers who happen to be employed there.

- WANdisco CEO David Richards claims without evidence that bogus changes are being committed to the master tree. He wrote: "We ... believe it's unhelpful when certain unscrupulous committers decide to commit trivial changes in large files to simply get their stats up. That behavior has no place in any open source project; it's a bad form [sic] and wastes everyone's valuable time." We are unaware of any such behavior among the Subversion maintainers. Our repository logs are always open for public inspection, yet when asked to show evidence, Richards refused.

- The first part of [1] would indicate to most readers that WANdisco was involved in the creation of Subversion; only if the reader were to persist for another six paragraphs would they finally encounter a disclaimer to the contrary. [3] has similar problems. WANdisco was not involved in the creation of Subversion. The Subversion open source project was started in 2000 by CollabNet, Inc. WANdisco's involvement started in 2008, when it began employing Subversion committers, all of whom had prior history with the project. Subversion became part of the Apache Software Foundation in 2009. (CollabNet continues to participate in Subversion development to this day, on the same terms as all the other individuals and companies who undertake or fund development work.)

- The Subversion development team is already working towards the enhancements that WANdisco inexplicably portrays ([2], [3]) as bold, controversial steps that must be pushed through in the face of (conveniently unnamed) opposition.

- WANdisco participates in Subversion development along with many parties, and the Subversion project has always welcomed WANdisco's contributions. However, WANdisco alleges that some entities want to impede technical enhancements; at the same time, WANdisco also implies that it is the corporate leader of the project. Neither is true. Since WANdisco does not cite any sources for their specific claims, we cannot explain them. However, a bedrock condition of participation in Apache Subversion is that an individual contributor can have discussions, submit patches, review patches, and so forth, but that companies do not have a formal role. Instead, companies get involved by funding individuals to work on the project. WANdisco's false implication that it is in some kind of steering position in Subversion development discredits the efforts of other contributors and companies.

In conclusion, we reiterate that we welcome WANdisco's involvement in Subversion, and failure on WANdisco's part to address the above concerns will have no effect on the acceptance of technical work funded by WANdisco. We simply felt it necessary to clarify WANdisco's role in Apache Subversion, for the benefit of our users and potential contributors.

# # #
Posted over 13 years ago by rgardler
Who should host their projects on Apache Extras? Apache Extras is aimed primarily at those who are unable or unwilling to license their code under the Apache License V2, but want to signal their relationship to one or more Apache project communities. One example of this is my own Drupal connector for Apache Wookie (Incubating). This needs to be GPL licensed due to the Drupal dependency, but it contains Apache Licensed code as well. Consequently it cannot be hosted at Drupal, nor can it be hosted at the ASF. Now, with Apache Extras, it has a home that is associated with at least one of those organisations. A second group of projects that may choose to host on Apache Extras are those who wish to manage their projects in a way that is not aligned with our own collaborative, consensus-based processes.
Posted over 13 years ago by pctony
As of approximately 03:00 (UTC) today the infrastructure team have enabled a password policy for all LDAP accounts. This policy has been implemented at the LDAP infrastructure level and will affect all users. It has been deployed using OpenLDAP's password policy schema and overlay. At the time of launch we will be enforcing the following policy: on a given user's 10th successive login failure, the account will be locked. The account will then be automatically unlocked 24 hours later, or earlier if a member of root@ unlocks it for you. If the user successfully completes a login before the tally reaches 10, the counter for failed logins is reset back to 0. We are enabling this to try to prevent any brute-force attempt at guessing passwords. It will also highlight potential issues with accounts. As with all account-related queries, you should be contacting root@ - we will be able to unlock your account for you, allowing you to gain access.
Posted over 13 years ago by rgardler
All Apache projects use the same pragmatic software license, the Apache License v2. However, we recognise that there are other FOSS licences out there, some of which are incompatible with the Apache License v2. Code under these licenses cannot be hosted on Apache servers, but people sometimes choose to, or are required to, use them. Until the launch of apache-extras these projects had no single home to go to. They were spread across all of the various hosting platforms. This made it difficult for communities to cluster around related technologies. For us this was a problem, since we believe strong collaborative communities are the key to successful open source software. Apache Extras provides these projects with a way to clearly signal their relationship to one or more Apache project communities. This will help them attract developer communities to their own projects.