
News

Posted over 12 years ago
 The openSUSE Education team is proud to announce the release of the updated openSUSE Edu Li-f-e (Linux for Education), a Linux distribution that provides parents, students, teachers, and IT admins running labs at educational institutions with education and development resources for their needs. Edu Li-f-e is based on openSUSE 11.4. This new version includes all packages available from the official update repository, which contains many bug and security fixes. There are also some version updates from the Open Build Service and a few new feature additions as well. New features include, but are not limited to: KDE 4.7; GeoGebra, a geometry package that allows both graphical and algebraic input; the Chromium browser, from the open-source project behind the Google Chrome browser; and Blender, the cross-platform suite for 3D creation, with ffmpeg and LuxRender support. Check out the complete list of packages here. The prebuilt LTSP image is also updated: it now uses openSUSE 11.4 with all available updates, and the upstream LTSP packages have been updated to the latest snapshots. See this howto guide for setting up LTSP on openSUSE Edu, and the KIWI-LTSP portal page for more information. The ISO image can be used to create a live DVD or a bootable USB stick. If you intend to install it for everyday use, you need roughly 20 GB of disk space; the install itself takes about 10 GB. More information about this image, including known issues and their workarounds, can be found here. For enhancement requests, bugs, or just to say hello to us, see "Communicate" here. And don't forget to have a lot of fun… Hosted at sourceforge.net: Direct Download | md5sum. Hosted at opensuse-education.org: Direct Download | new metalink | old metalink | md5sum | torrent. Use a download manager or a Metalink client such as aria2c for the most efficient way to download.
Posted over 12 years ago
During the Desktop Summit in Berlin, we had a session in which we took a good look at how KDE's release team performs, which points we can improve on, how, and who will implement these changes. For example: make our output more predictable and thus downstreams' lives easier; reduce the risk of something going wrong (the bus-number problem); make our work more efficient. Let's take a step back first, though. A bit of history: from Coolo & Dirk to TWG to Release Team. In KDE 3, Coolo and Dirk managed KDE's releases. When KDE 4 was started, we also wanted to promote it in a more suitable fashion, so we started doing more proactive PR work and generally improved our communication with our users (potential or not). In 2005, the Technical Working Group was conceived, which was meant to handle the releases of KDE 4. The Technical Working Group didn't quite work out, since it didn't bring the structural changes we needed in order to release KDE 4. In late 2006 we decided that it didn't work, disbanded the TWG, and conceived the release team, a self-organized team of volunteers. Conceptually, the release team started as a place to go for people who wanted to make releases happen. We found module maintainers for our released modules who would coordinate with the app teams, and a few core people who know how all that stuff works to put things together. We got our release process on the rails again. Interestingly, the release team is very informal; we never ended up appointing anyone release manager or end-responsible, and it has never been necessary, it seems. Lesson here: shared responsibility works well for us. A quick unscientific count of the whole KDE 4 series (including pre-releases) brought me to 88 releases done by the release team in its current form. Git migration and tarball layout: during the migration of our source code repositories to Git, we had a bit of trouble keeping up with the splitting of our released modules.
This caused a bit of grief among some packagers, but we've sat down together a few times and worked out good solutions to these problems. The Git migration meant that we'd split up repositories such as kdegraphics, but shipping a different set of tarballs also means that packagers have to adjust their scripts, spec files and so on. The Git migration was spread out over a period of time, meaning that we basically ended up with shifting tarball layouts across patch-level releases, which unintentionally added a bit of work across those releases. Thanks to the feedback from packagers (Slackware especially has helped us a lot there), we were able to address these things before and during the Desktop Summit, and we have become much more aware that the tarball layout is essentially a public interface for us, which should be kept stable and not change without good reasons and communication to those who are affected by it. This will be done as changes to the release scripts, which combine Git repos into tarballs. Release Team BoF: as I mentioned, during the Desktop Summit we used the opportunity to talk about these problems and find solutions to them in a BoF (birds-of-a-feather) session. About 12 people got together to work on this, including downstream participants such as Slackware, openSUSE, Gentoo, MeeGo and Balsam, people from the release team, build-system experts, and GNOME's release team. A group with different angles but a shared goal. I think collaboration has come a long way, seeing that we share processes and experiences across many groups. The BoF session was kicked off by a discussion about whether we want to change our release rhythm. People were pretty happy with our time-based, six-monthly releases, so we quickly decided to keep these. We thought about reducing the number of patch-level releases, which might make sense since we're in much calmer waters than when we started with this rhythm at 4.0.
There are, unsurprisingly, a lot fewer critical bugfixes now than back in 2008. Our users seem to be really happy with our cadence, so we decided to reduce the work these releases take a bit more, but to keep our monthly updates; no changes there. Release Team Tweaks: so what did we come up with? The following list of items should give you a good idea, but please keep in mind that we haven't nailed everything down completely and that it's still a bit of a work in progress. The things left to do aren't that horrible, so I suppose it won't take long until we have adopted these tweaks to our processes and the results become visible. I'll put this onto the wiki once I've dealt with the post-Desktop-Summit lack of sleep. :) Human backup strategy: we've checked with each other who jumps in. Once in a while we'll have the backup person take over a release, so we make sure that we not only think we're safe, but that we actually are. We've started approaching people who we think have the responsibility, trust and skills to back up the release team. More parts to be automated: script more parts of the release notes; our patch-level releases still involve "monkey work" to a certain degree. Some easy changes to the PHP scripts will help us maintain consistency across release notes. More predictable tarball layout: the tarball layout will be kept in place for 4.x; Frameworks 5.0 will obviously bring changes to the tarball layouts, which will happen as a one-time change. More rhythm to the community, by communicating more often where we are in our cycle. This will most likely happen in the form of reminders to mailing lists such as kde-core-devel, release-team and kde-devel. Promo team does release notes: the promo team is the right place to create release material, and Carl offered to coordinate this on the kde-promo side.
I think that these points bring a nice set of improvements to our current release process and pave the way for the eventual release of KDE Frameworks 5: we came up with a strategy to facilitate the modularization of the structure of, especially, our development frameworks. One that works for us and our downstreams.
Posted over 12 years ago
tl;dr: -stable kernel releases stay the same; this proposal is about how we pick the -longterm releases. -longterm kernels will be picked every year, and maintained for 2 years before being dropped. The same Documentation/stable_kernel_rules.txt will apply for -longterm kernels, as before. History: 2.6.16 became a "longterm" kernel because my day job (at SUSE) picked the 2.6.16 kernel for its "enterprise" release, and it made things a lot easier for me to keep applying bugfixes and other stable patches to it to make my job simpler (applying a known-good bunch of patches in one stable update was easier than a set of smaller patches that were only tested by a smaller group of people). Seeing that this worked well, a cabal of developers got together at a few different Linux conferences and determined that, based on their future distro release cycles, we could all aim for standardizing on the 2.6.32 kernel, saving us all time and energy in the long run. We turned around and planted the proper seeds within the different organizations and, lo and behold, project managers figured that this was their idea and sold it to the rest of the groups and made it happen. Right now all of the major "enterprise" and "stable" distro releases are based on the 2.6.32 kernel, making this trial a huge success. Last year, two different community members (Andi and Paul) asked me if they could maintain the 2.6.34 and 2.6.35 kernels as -longterm kernel releases, as their companies needed this type of support. I agreed, and they have done a great job at it. Andi reports that the 2.6.35 kernel is being used by a number of different distros, but they will be phased out as their support lifetime expires. There are also a number of embedded users of the kernel, as well as some individual ones. So that -longterm kernel is providing a lot of benefit for a wide range of users.
Today: now that 2.6.32 is over a year and a half old, and the enterprise distros are off doing their thing with their multi-year upgrade cycles, there's no real need from the distros for a new longterm kernel release. But it turns out that the distros are not the only users of the kernel; other groups and companies have been approaching me over the past year, asking how they could pick the next longterm kernel, or what the process is for determining this. To keep this all out in the open, let's figure out what to do here. Consumer devices have a 1-2 year lifespan, and want and need the experience of the kernel community maintaining their "base" kernel for them. There is no real "enterprise" embedded distro out there from what I can see. MontaVista and Wind River have some offerings in this area, but they are not that widely used and are usually more "deep embedded". There's also talk that the CELF group and Linaro want to do something on a "longterm" basis, and are fishing around for how to properly handle this with the community to share the workload. Android is another huge player here, upgrading their kernel every major release, and they could use the support of a longterm kernel as well. Proposal: here's a first cut at a proposal; let me know if you like it, hate it, whether it would work for you and your company, or not at all: a new -longterm kernel is picked every year; a -longterm kernel is maintained for 2 years and then dropped; -stable kernels keep the same schedule that they have had (dropping the last one after a new release happens). These releases are best for products that require new hardware updates (desktop distros, community distros, fast-moving embedded distros like Yocto). The normal -stable rules apply to these -longterm kernels, as described in Documentation/stable_kernel_rules.txt. This means that there are two -longterm kernels being maintained at the same time, plus one -stable kernel.
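The pick-one-per-year, maintain-for-two-years arithmetic above can be sketched quickly. A toy model (the version numbers and pick years here are hypothetical, not part of the proposal):

```python
def maintained_longterms(picks, now):
    """Return the -longterm kernels still maintained at year `now`,
    given (version, pick_year) pairs and a 2-year support window."""
    return [version for version, year in picks if year <= now < year + 2]

# Hypothetical picks: one new -longterm kernel per year.
picks = [("2.6.32", 2009), ("2.6.38", 2010), ("3.0", 2011)]

# At any point, at most two -longterm kernels overlap.
print(maintained_longterms(picks, 2011))  # ['2.6.38', '3.0']
```

With a yearly pick and a two-year window, exactly two overlapping -longterm kernels is the steady state, which matches the proposal's conclusion.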
I'm volunteering to do this work, as it's pretty much what I'm doing today anyway, and I have all of the scripts and workflow down. Public notifications: the current kernel.org site doesn't properly show what is and is not being maintained as a -stable or -longterm kernel. I have a proposal for how to fix this involving 'git notes'; I just need to sit down and do the work with the kernel.org admins to get this running properly. Thoughts? Feel free to comment on the Google+ thread about this, or on the lkml thread.
Posted over 12 years ago
Up early; drove into Saint Junien with Ron to meet with their (Assemblies of God) fellowship. A familiar setup, if not a familiar language: "Blanc, plus blanc que neige, Lavé dans le sang de l'Agneau, Je serai plus blanc que la neige" ("White, whiter than snow, washed in the blood of the Lamb, I shall be whiter than snow") (words), and an unfamiliar bisou culture. Left after communion. I had always thought that the typical Baptist approach of having lots of little cups was to avoid problems, since they typically use a non-alcoholic wine; interesting to see the combination of real vin rouge with many small cups. Much driving, to a very cheap but (for the children) exciting hotel north of Paris; succumbed to the 'McDo' (with children's playground) opposite, sigh. Bed early.
Posted over 12 years ago
I've just released GMime 2.5.10, which I hope to be the last of the 2.5 releases before I release 2.6.0. I feel that I've stretched the development of 2.6.0 out for far too long (2.5 development began at the end of April 2009) and, even though I didn't get around to doing everything I had hoped to do, I feel that the latest 2.5.x releases are such an improvement over 2.4.x that I just want to get it out there for developers to start using. But before I make a 2.6.0 release, I'm hoping to get some feedback and some testing. What's new? New for the 2.5.10 release is GMimePartIter, which replaces the need for g_mime_object_foreach() and its awkward callback requirement, instead allowing you to take the far nicer iterator approach that is popular in the C# and Java worlds (known as IEnumerator in C#). This new iterator, like the foreach function it replaces, iterates over the MIME tree structure in depth-first order. Inspired by IMAP's FETCH body part-specifier syntax, I've implemented a method allowing you to jump to a part based on a part-specifier string (aka a path): g_mime_part_iter_jump_to(). Also implemented is a function called g_mime_part_iter_get_path(), which tells you the current part-specifier path of the iterator. For example, if you had the following MIME message structure:

multipart/related
  multipart/alternative
    text/plain
    text/html
  image/jpeg

The body part-specifier paths would be:

1     multipart/alternative
1.1   text/plain
1.2   text/html
2     image/jpeg

This means that g_mime_part_iter_jump_to(iter, "1.2") would jump to the part specified by the path "1.2" which, as we can see above, is the text/html part. Calling g_mime_part_iter_next(iter) would iterate to the next part, the image/jpeg, while calling g_mime_part_iter_prev(iter) would iterate backwards to the text/plain part, and calling it again would iterate backwards to the multipart/alternative.
What I need from testers: my feeling is that developers will want to use this cool new body part-specifier path functionality to aid them in implementing IMAP servers and/or clients. Because of this, it would be great if GMime's implementation matched IMAP's specification exactly. The problem is that I don't have the time or energy to verify that the paths work out to be identical in all cases. So... if you are one of those developers who is interested in using this functionality and needs it to be identical to IMAP's syntax (or would really like it to be), I'm hoping that you could test it out and make sure that it matches. Especially worth testing, I'd imagine, is having message/rfc822 parts in the tree; I suspect that, if anywhere, this is where differences may lie. If body part-specifier paths aren't something you care about, don't fret; the rest of the iterator API needs testing as well, and if you have no interest in the iterator API at all, perhaps you'd be willing to test the S/MIME functionality (especially since I haven't figured out how to test it myself, given that I don't have an S/MIME cert, nor have I figured out how to generate one or add one to my gpgsm keyring). Your help will be greatly appreciated.
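GMime's API is C, but the depth-first, IMAP-style numbering described above is easy to model. Here is a toy Python sketch of the path semantics (the `Part` class is made up for illustration, and this toy ignores the message/rfc822 subtleties mentioned above):

```python
class Part:
    """A minimal stand-in for a MIME part: a content type plus children."""
    def __init__(self, ctype, children=()):
        self.ctype = ctype
        self.children = list(children)

def walk(part, path=()):
    """Yield (path-string, content-type) pairs in depth-first order,
    numbering children from 1, as IMAP body part specifiers do."""
    for i, child in enumerate(part.children, start=1):
        p = path + (str(i),)
        yield ".".join(p), child.ctype
        yield from walk(child, p)

# The example tree from the post.
msg = Part("multipart/related", [
    Part("multipart/alternative", [Part("text/plain"), Part("text/html")]),
    Part("image/jpeg"),
])

paths = dict(walk(msg))
print(paths["1.2"])  # text/html
```

The `dict` plays the role of `g_mime_part_iter_jump_to()`: looking up "1.2" lands on the text/html part, exactly as in the example above.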
Posted over 12 years ago
Too bad G+ does not provide RSS/RDF/Atom feeds :/ It would have made it a lot easier to integrate into existing infrastructure such as planet aggregators. I'm giving some hosted application a run, but the sources are not available. Well, and not even the application itself. Not something we can realistically rely on. And HTML scraping is too fragile, as I suspect the markup will still change a lot in the near future. Disappointing, really. Hopefully addressed soon.
Posted over 12 years ago
Started fiddling around with an idea that (more or less) randomly popped up on IRC when I was briefly discussing packaging of Java libraries. The thing is, there is already a vast amount of metadata around, namely in the Maven Central repository (http://repo1.maven.org/maven2/). Yes, I know, it's not complete, and quite a vast number of .pom files have nonsense in them, but still... So I thought, hmm, how about a script that generates .spec files (to build RPMs) from Maven .pom files (they're XML files with metadata, including about dependencies)? Mind you, it does NOT build from source! That is quite a huge undertaking and requires jumping through an almost endless number of hoops, for many reasons I won't talk about right now. What it does is simply pull the "binary" (bytecode) jar file from Maven Central, optionally the source jar (which is extremely useful in IDEs such as Eclipse, NetBeans, ...), and simply install it into %{_javadir}. Yes, I know, many people don't like this and it kind of defeats the purpose of RPM, or at least quite a few bits of it. No, indeed, we won't be able to patch the sources like that. But to be honest, how often did we have to patch Java libraries that weren't already patched upstream? Quite rarely, actually, except maybe for things like Ant and Maven, which require a significant amount of integration into the distribution. My point is this: I believe it is better to have those "binary" RPMs of some of the gazillions of open source Java libraries/frameworks/applications that exist out there than to have almost none of them.
Because packaging Java stuff into RPMs is extremely tedious. So here it is, take it or leave it, for what it is: https://gitorious.org/pbleser/pbleser/blobs/master/pom2spec (but I'll obviously continue to explore the idea and hack on it; I'm mostly looking for ideas, comments and opinions right now, especially on what's below). It's a bit rough, it's a prototype, it's in Perl, and we actually need to think a bit about how we want to name the packages, because conventional package names would be highly counterproductive in this case. We need a scheme that allows many versions to be installed in parallel. Personally, I would even go as far as having one RPM package per upstream version. Possible naming schemes are:

* spring-beans (just the artifactId)
* spring-beans-2_5_6SEC02 (the artifactId and the version, mangled to be acceptable as an RPM %{NAME}, as the name may not include dots)
* org.springframework.spring-beans (${groupId}-${artifactId})
* org.springframework.spring-beans-2_5_6SEC02 (${groupId}-${artifactId}-${version}, the latter mangled as explained above)

We should definitely also add a few Provides in any case, e.g.:

* Provides: java(org.springframework:spring-beans:2.5.6SEC02)
* Provides: java(org.springframework:spring-beans)

(the "${groupId}:${artifactId}:${version}" form is often used and referred to in Maven, or Gradle, or ...)
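As a rough illustration of the fourth naming scheme and the Provides idea above, here is a toy sketch that extracts the Maven coordinates from a POM and mangles the version (the POM snippet is made up, and the real pom2spec is in Perl, so this is only a model of the mangling, not the tool itself):

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal POM, for illustration only.
POM = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <groupId>org.springframework</groupId>
  <artifactId>spring-beans</artifactId>
  <version>2.5.6SEC02</version>
</project>"""

NS = "{http://maven.apache.org/POM/4.0.0}"

def pom_coords(pom_xml):
    """Return (groupId, artifactId, version) from a POM document."""
    root = ET.fromstring(pom_xml)
    return tuple(root.findtext(NS + tag) for tag in ("groupId", "artifactId", "version"))

def rpm_name(group, artifact, version):
    """${groupId}.${artifactId}-${version}, dots in the version mangled
    to underscores, per the naming scheme discussed above."""
    return "%s.%s-%s" % (group, artifact, version.replace(".", "_"))

g, a, v = pom_coords(POM)
print(rpm_name(g, a, v))             # org.springframework.spring-beans-2_5_6SEC02
print("java(%s:%s:%s)" % (g, a, v))  # java(org.springframework:spring-beans:2.5.6SEC02)
```

The second print is the groupId:artifactId:version Provides form, which would let other spec files depend on a library by its Maven coordinates.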
Posted over 12 years ago
Previous: Bitcoin Transaction Speed vs. Double-Spending. Note: I do not endorse the use of Bitcoin for any illegal activities. Like many, I regret the necessary tradeoff between freedom, privacy and legitimate law enforcement. One of the much-touted advantages of Bitcoin is its anonymity. The full record of all transactions is, by the very nature of Bitcoin, completely public. However, transactions move coins between a set of incoming addresses and a set of outgoing addresses, and the addresses are not tied to any particular entity; an address may belong to anyone, and Bitcoin includes no way to tell. The problem, of course, is that there might be ways to tell outside of Bitcoin, especially if you are a legal authority (or good at finding security holes; or a Google user). Let's say someone gives a donation (from address X) to an illegal free-speech organization in Burma, and you want to figure out if it was a Burmese resident, and his address in that case. The organization uses anonymous donation-specific addresses (d0, d1, ...). It wants to purchase a stock of paper for leaflets delivered to a conspiratorial address, without tipping off the secret police. Let's assume the secret police captured a computer of another supporter of the dissidents. They retrieved the source address used for donations (address C) and now they look at where the coins went. A complicated bead-net of addresses that merge and split coins among each other unrolls. The trouble starts if it is possible to identify some of the addresses in the net. By government order, all incoming payments to the paper store are transferred from order-specific addresses to a single police-registered address P; the first order was paid at address p0, the second would be paid at p1. The police can now clearly see that the coins from address C, transferred to donation address d0, have been used to pay for paper, and they will inquire in the store about the customer behind the address p0.
But they can also see that other money has been used for this purchase as well. The trail goes through addresses X, xb, xa, xx, all unknown to the police, but it arrives at xx from M, the main coin store address of the local Bitcoin exchange. The authorities inquire at the exchange and get the bank account number associated with the outgoing payment. The bank happily provides the address and the police are on their way. Does the bank account owner also own the address xx? And what about xa, xb and X: are these her addresses (used for keeping order in the books or to muddle the tracks) or someone else's? Whose, and why did the money go there? The case is not proven yet, but a person is already being questioned at the secret police office, and in a totalitarian regime the burden of proof is effectively on her. The paper paid for at p1 will never arrive, but several black cars will. (The xx-xa-xb-X split can be improved by sending parts of the amount to different addresses, then sending coins between them, and so on, but AFAIK this does not really help much, as the coins are still being passed within a closed subgraph. Also, the dissident organization might try to avoid mixing independent sources in payments, but this may be difficult if you live off small donations.) So, is all lost and hopeless, back to good old cash? Well, not necessarily. There are various approaches to combat the problem. Solve part of it outside the box: launder coins abroad. Spend them on legitimate-looking puppet causes while staying sure they either return to you or get sent the right way. Rinse and repeat several times. Preferably include real people with googleable Bitcoin addresses early in this chain to completely dissociate yourself from the coins in the public record. Of course, you need some real-world cooperation and friends worldwide.
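The tracing described above is essentially a reachability search over the public transaction graph. A minimal sketch, using a hypothetical edge list that mirrors the story's addresses:

```python
from collections import deque

# Hypothetical directed payment graph: address -> addresses it paid.
payments = {
    "C": ["d0"], "d0": ["p0"],                     # donation trail
    "M": ["xx"], "xx": ["xa"], "xa": ["xb"],       # exchange trail
    "xb": ["X"], "X": ["p0"],
}

def taint(graph, source):
    """Breadth-first set of addresses reachable from `source` coins."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Coins from the captured address C reach the police-watched p0 ...
print("p0" in taint(payments, "C"))  # True
# ... and so do coins from the exchange's address M, via xx-xa-xb-X.
print("p0" in taint(payments, "M"))  # True
```

This is why the xx-xa-xb-X split does not help by itself: however many hops are inserted, the subgraph stays connected and the reachability query still succeeds.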
Similar approaches, like dissociation by re-selling goods, may be considered, but they seem very cumbersome, error-prone and likely to lose money (or to need large investments and much time). This all could become a lot easier when paying with bitcoins in person becomes commonplace. An obvious scheme to muddle the tracks is a mixing service (an escrow service might do this as a side-effect). Many people send in coins, they are all included in a single transaction, and this transaction has many outgoing addresses for each user sending in money. It will split a large pile of possibly suspect money into many small piles that can be used separately, and in the transaction it will be completely unclear which of the many source addresses is the source of the coins. Repeat a few times with different sets of co-mixers and you have pretty much perfectly fogged the origins of the money. It sounds great, but it is fraught with many practical issues. Most obviously, you need to trust the service with your money; you send it in, and it might arrive back, or maybe not (they may be scammers, or may just have technical trouble or get hacked). It should definitely be set up in some "safe" country. And still, you need to trust that it is not logging anything, nor its ISP, nor your country's Great Firewall (i.e., use Tor). Your government secretly running this service would in fact be a masterstroke. And of course, you absolutely need the "many people" part as well, so that you are actually mixing the coins with others. But these "many people" should be mainly honest persons with legal coin targets; there is not much point in mixing if you are mixing just with crooks. For the same reason, "many honest people" will be reluctant to get into trouble needlessly and mix with the crooks and you. An escrow service with mixing as a side-effect could alleviate this issue somewhat. But what about a more elegant way?
When making a transaction, you can pay a transaction fee, mainly to ensure speedier inclusion in a block. The transaction fee goes to whoever mines the block that includes the transaction, and there is no upper bound on a transaction fee. We will spill coins into the fees. The idea is similar to a mixing service, but instead of requiring voluntary participation, we use the txfees and the whole block chain as a mixing platform. In exchange, we concede that we cannot have all the money back, but only some large fraction (a third? a half?). We will produce dummy transactions that move all the coins into the txfee, then collect part of it back in block generation payouts. This is completely legitimate and automatic; blocks generated by us and by other solo miners (pools are easily identifiable) are indistinguishable. Some of the money will return as clean laundry while the rest goes to support the Bitcoin system. The practical implementation is simple. First, one needs to choose the ratio: the smaller the ratio, the harder it is to track the most common destination of the spillage, but the less money you get back, obviously. Then, you create many small fee-spilling transactions. (More of them means the coins are better laundered for further use, but they take more time to recover.) You keep part of them for your own blocks and broadcast the rest for other miners to pick up. You can get very smart about this: scan the p2p network to broadcast to solo miners preferentially (you will be better masked), do some probabilistic modeling on the blocks you create, and so on. The big catch is being able to produce blocks at a sufficient pace. The answer is either making a huge investment and building a massive solo rig, or just renting one! There are good rigs for rent. It is again important to rent abroad and make sure you cover your tracks well when paying the rent. Of course, this is more practical for the accepting organization than for the individual donor.
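Under a toy model of the scheme above, the expected return is easy to work out: if you keep fraction k of the fee-spill transactions for your own blocks (recovering them in full, eventually) and broadcast the rest, of which you mine fraction p by chance, the expected recovered fraction is k + (1 - k) * p. A quick sketch with hypothetical numbers, simplifying away timing and variance:

```python
def expected_recovery(p, k):
    """Expected fraction of spilled coins returning as block rewards.
    Toy model: you mine fraction p of all blocks; you include fraction k
    of the fee-spill transactions only in your own blocks and broadcast
    the remaining (1 - k) for anyone to mine."""
    return k + (1 - k) * p

# Renting ~10% of the hashrate and keeping half the transactions:
print(round(expected_recovery(p=0.10, k=0.5), 2))  # 0.55
```

The tension the post describes falls out of the formula: raising k recovers more money but concentrates the fees in your own blocks, making the spillage destination easier to track, while lowering k donates more to the rest of the network.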
This proposal is far from perfect, but it should work, and I have not seen it mentioned anywhere yet. My main point is that there may be more tricks like this, and the anonymity discussion might yet get interesting.
Posted over 12 years ago
We are pleased to announce our new openSUSE Weekly News, issue 188. openSUSE Weekly News, 188th Edition, by the openSUSE Weekly News Team. Legal notice: this work (compilation) is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License. The rights to the compilation itself are copyright Sascha Manns. Opt-out: if you are an author and don't want to be included in the openSUSE Weekly News, just send a mail to: <[email protected]>. Copyrights of the referenced articles are owned by the original authors or copyright owners. If you want to reuse those articles, ask each original copyright owner which license applies. We don't reprint any article without a free license; we merely introduce it, under the provisions of German copyright law. If you are an author and want to put your blog under a free license, just visit: http://goo.gl/Tw3td We thank the whole openSUSE Weekly News team and the open-slx GmbH for putting time and energy into the openSUSE Weekly News. Published: 2011-08-13. Table of Contents: Announcements; Google Summer of Code; Status Updates; Distribution; Team Reports; In the Community; Events & Meetings; openSUSE for your Ears; Communication; Contributors; Security Updates; Kernel Review; Tips and Tricks; For Desktop Users; For Commandline/Script Newbies; For Developers and Programmers; For System Administrators; Planet SUSE; openSUSE Forums; On the Web; Announcements; Reports; Reviews and Essays; Warning!; Feedback; Credits; Acknowledgements; Copyrights; List of our Licenses; Trademarks; Translations. We are pleased to announce the 188th issue of the openSUSE Weekly News. You can also read this issue in other formats here. Enjoy reading :-) Announcements: Important: the articles inside this section are given in full. If you already know the stuff on news.opensuse.org, you can skip this section using the TOC. “ Strategy DONE! Almost 2 years ago, at the first openSUSE conference, a discussion started about strategy.
A few months ago a final document was ready, and on July 14th 2011 the strategy voting ended. Over 200 of the openSUSE Members voted, with 90% in favor of the current strategy document. What's next? Looking back: it's been a long ride, and we'd like to give a short overview of the strategy discussion in the openSUSE community over the last 2 years. The beginning: the strategy process was started after the first openSUSE conference, now almost 2 years ago. In that time, quite a number of people have participated in the strategy team: Michael Löffler, Joe Brockmeier, Kurt Garloff, Jan Weber, Pascal Bleser, Andreas Jaeger, Bryen Yunashko, Pavol Rusnak, Jos Poortvliet and Thomas Thym. Of course, many others contributed by commenting on the proposals via mailing lists, forums, blogs and other channels. Meetings: initially, the team met weekly and focused on learning about strategy and how to apply it to a community project like openSUSE. A competition analysis was done, as well as an assessment of strengths and weaknesses, and an overview of the challenges openSUSE is facing was made. In May 2010 a face-to-face meeting was held, and the team came up with a community statement and three different, ambitious and narrow high-level visions that we planned to evolve and combine later. These visions were then presented to the community, and we hoped new scenarios would come from them. Change: where the team started out quite ambitiously, trying to define new niches and a clear direction, it became visible that the majority of the community lost interest along the way. In November 2010, the team decided to do a U-turn and “ focus on describing who we are, as a community, instead of finding new ways to go.
” The new goals were to: highlight the story behind openSUSE; identify what users we target and illustrate what we offer them; and connect it with the issues that matter most to our community. New tools and lots of input: much of the input given by community members throughout the process was looked at again and integrated into a first draft focusing on the target users of openSUSE. With this draft, a new way of discussing the document was introduced: “ co-ment, a pretty awesome commenting tool under the GNU Affero GPL ” Co-ment made it easy to give input on a specific sentence or word and discuss it in a structured manner, and a lot of input came in from the whole openSUSE community, with the second revision, introducing “what openSUSE offers its users”, and the third, “what does openSUSE not do”, each drawing almost 100 comments on co-ment alone. More responses were gathered and processed from various channels like mailing lists, the openSUSE forums and many other means. The last posting before the openSUSE conference attempted to shorten the document. At the conference, the strategy was presented and discussed. This re-invigorated interest in the strategy for some, and a new team member joined the strategy discussions. Based on the feedback at the conference, the document gained some clarity as well as a short introduction. On the 20th of December 2010, the strategy team sent the 'final' document to the openSUSE board to facilitate the member voting process. Finishing: due to the busy time before the openSUSE 11.4 release, it took a while for the board to go over the document. Some minor nitpicks arose and the new initiative Tumbleweed was added, but after that the board asked Thomas Thym to start the voting. In the end, Jos Poortvliet put a vote on connect, as Thomas was not an openSUSE Member yet, and the team announced the start of the voting. Shortly before the 30th of June, the deadline was extended to leave time for a mass mailing to all openSUSE Members.
It had turned out that quite a few had not noticed the strategy voting yet, and the Board wanted to give them a chance to provide their input as well. The results So on the 14th, the voting ended with a total of 204 votes, 90% of which were supportive of the strategy document (see table). As the voting page said, this support “ does not mean that you have to fully agree or that it is exactly how you want it – we are a diverse community with many opinions and individual goals. We can never all agree on anything, unless it is so completely vague it doesn’t mean anything. This document is the product of a compromise, but the team feels it does adequately describe who we are and where we want to go. ” It does mean that the openSUSE Membership feels the document adequately describes the openSUSE community and what it does and doesn’t do. “Having the strategy document in place provides the project an anchor to reflect upon when project questions and issues arise”, openSUSE Board Chairman Alan Clark said. “It is a very good reference point for those either new to the project or those wanting to capture a glimpse of what openSUSE is and why one should come join us.” openSUSE – the world’s most flexible and powerful Linux distribution The strategy is a document which describes openSUSE as a community: our values, our goals and the way we work. It starts with the ‘community statement’: “ We are the openSUSE Community – a friendly, welcoming, vibrant, and active community. This includes developers, testers, writers, translators, usability experts, artists, promoters and everybody else who wishes to engage with the project. ” And then summarizes what we do as follows: “ The openSUSE project is a worldwide effort that promotes the use of Linux everywhere. The openSUSE community develops and maintains a packaging and distribution infrastructure which provides the foundation for the world’s most flexible and powerful Linux distribution.
Our community works together in an open, transparent and friendly manner as part of the global Free and Open Source Software community. ” After that, the strategy goes into more detail, talking about our target users, our philosophy when it comes to development, our focus on collaboration and the things we don’t do. While reading the strategy, you need to keep two things in mind: it is meant as an internal document – it’s not marketing speak. And it’s not meant to tell anyone what to do or not to do – we are an open community! The future The team noted that the “ strategy is of course not set in stone for eternity although we probably won’t go through this process every year… ” and asked for further feedback in the mass mailing to the membership. Some comments did indeed come in, most notably asking about the ARM port and mobile devices as well as the impact of the openSUSE Foundation. In the future, the strategy documents will surely require some revision. Once somebody in the openSUSE community steps up to do an ARM port, which is likely to attract significant help, the document will have to be revisited to reflect this, just like Tumbleweed resulted in a change. And once an openSUSE Foundation is established it is likely this new organisation will become ‘owner’ of this document. For such changes, the <[email protected]> mailing list is still open and it will remain so. Obviously a discussion can also be held on the opensuse-project mailing list or in other places! Google Summer of Code “ Christos Bountalis: A utility for merging configuration / sysconfig files – Week 11 Report This week’s report is a bit late, but not without a reason: for the last weeks I have been working hard in order to fulfill the initial goals of this project. After lots of coding / compiling / testing this week, and of course brainstorming, I can now share with you very good and exciting news. What is done this week: aug_process_trees is finally done!!
That means we can now proceed to the final goal, that of implementing the merging functions. Moved the whole code to Augeas version 0.9. Added necessary code and fixed existing code: tree_get_children (fixed) tree_compare_children (reworked) tree_match_combine (added) tree_match_lower_level (added) tree_child_sort_label (fixed) debug_print_treeArray (added) debug_print_treeMatchArray (added) What is to be done: Finish Merging Functions Create First Beta Packages That is all in a few lines. As GSoC is getting closer to the end, the time available for completing the project is getting shorter. So I had better get back to coding… ” “ Marcus Hüwe: osc code cleanup – summary of week 11 Here’s a small summary of the 11th (coding) week. This week I spent most of my time working on the wc code. DONE: project wc: added commit and update methods lots of wc code refactoring TODO: project wc: commit only specific files for a package instead of the complete package (the package wc class already supports this) (use case: osc ci pkg1/file pkg1/foo pkg2/bar pkg3) convert old working copies to the new format package wc: update: add support to specify stuff like “expand”, “linkrev” etc. project wc: add a revert method (to restore a package wc with state ‘!’) project/package wc: support diff package wc: implement a pull method (does the same as “osc pull”) ” “ Sebastian Oliva: JSON Weekly report Hello, First of all, I’m sorry for the large delay in updating my previous report. I’ve spent most of the time improving the performance of the classes, adding multiple profiles support, and styling the admin, profile and other pages. I have also done performance testing, with responses averaging 50 ms/request on a testing server with 200 concurrent requests. I expect higher performance in a deployment configuration. On the to-do list is: Creating at least an offline submitting client (either GTK or Qt). Finishing the administrative pages configurations.
Adding authentication on submitting profiles. ” “ Aditya Dani: Test Suite for btrfs – Weekly Update This is a summary of the work that I did in the last week (week #10). Test no. 256 was modified to add support for memory-mapped I/O operations. Thus, in the test the files are now created using both direct I/O as well as memory-mapped I/O. A new test, no. 257, was added which handles large file creation and modification of up to 1 GB. Both tests 256 and 257 were modified to handle any snapshot-supporting generic file system. You can find the latest changes here. ” “ David John Williams: Entomologist UI Changes – Weekly Report #12 This week I’ve been working at getting Remember the Milk integration finished. Right now lists can be added to Remember the Milk and I’m in the middle of implementing update/delete functionality. It has been a hard week as I’ve never had to use a web-based API before and the Remember the Milk one doesn’t seem like the easiest to get to grips with (its beta form is probably indicative of this). The main time consumer now is getting the update/delete functionality, which isn’t an issue with Remember The Milk’s API but one of trying to find the right solution that is generic enough to be used with other services, so I avoid large chunks of code that are specific to a single service. For the final week I will continue working on integrating Remember the Milk, Google Calendar and WebDAV, and over the remaining 10 days I will try to get these services fully integrated. The final week of GSoC will hopefully be spent on preparing a 1.0 release and any code addition will be kept to a minimum (with a hope that there won’t be any outside of merging). ” Status Updates Distribution Important Links Detailed Bugzilla Report Submitting Bug Reports Bug Reporting FAQ Team Reports Build Service Team Build Service Statistics.
Statistics can be found at the Build Service. KDE Team “ Sebastian Kügler: Plasma Active MeegoExperts has done an interview here at the Desktop Summit in Berlin with Fania and Marco. The video explains the concepts and user experience in Plasma Active’s Contour Shell. Have a look for yourself to learn about this next-generation user experience for consumer devices, based on our beloved Free Software stack. See the video on YouTube. ” openFATE Team Top voted Features “ decouple download and installation (Score: 361) Network installation could be improved by running package download and package installation in parallel. ” “ Look at plymouth for splash during boot (Score: 188) I wanted to open a fate feature about this when I first heard of plymouth, but reading http://fedoramagazine.wordpress.com/2008/10/21/interview-fedora-10s-better-startup/ really makes me think we should go this way. Ray’s comment starting with “Every flicker and mode change in the boot process takes away from the whole experience.” is especially interesting. Is it okay to track the “don’t show grub by default” here? ” “ 1-click uninstall (Score: 162) An easy way to remove software! For example: you installed an application with “1-click install” (which will install all the packages that you need); there should be an easy way (also with 1 click) to remove what you have installed with that 1-click operation… in other words: a “1-click Uninstall” to remove installed software (dependencies and packages included). ” “ Update to GRUB v2 (Score: 145) Every single bug or feature that anyone has developed for GRUB 0.97 has been rejected by the upstream project in favor of using GRUB 2. There has been resistance in the distribution community to switching boot loaders, but this stalemate isn’t going to go away. The code itself isn’t well written or well maintained. Adding a new feature involves jumping through a lot of hoops that may or may not work even if you manage to work around all the runtime limitations.
For example, a fs implementation has a static buffer it can use for memory management. It’s only 32k. For complex file systems, or even a simple journaled file system, we run into problems (like the reiserfs taking-forever-to-load bug) because we don’t have enough memory to do block mapping for the journal, so it needs to scan it for every metadata read. (Yeah, really.) (…) ” “ Popularity contest (Score: 107) We need feedback about packages that are preferred by users and actively used. Debian already has a tool named Popularity contest (popcon) * reusing popcon will give us results that are directly comparable with Debian and Ubuntu * the packagers team can take care of the package * we need a configuration dialog in YaST that is visible enough * we need a server infrastructure on opensuse.org. (There are certain privacy issues, see the Debian FAQ for details) ” Recently requested features Features newly requested last week. Please vote and/or comment if you get interested. “ Package Installations should be chroot aware! Debian does this already! Check whether a package installation or upgrade is done in a chroot. Rationale: Every time I refresh my openSUSE-Tumbleweed installation from a chroot I get this error: Installing: filesystem-12.1-26.1 [error] Installation of filesystem-12.1-26.1 failed: (with –nodeps –force) Error: Subprocess failed. Error: RPM failed: error: unpacking of archive failed on file /proc: cpio: chown failed – Read-only file system Of course I bind-mount /proc read-only! In a chroot, mounting /proc is recommended for many purposes. For example, an installation attempt of a Gentoo stage will fail without /proc mounted.
As this /proc mounting is a standard procedure I prioritize this feature as mandatory … ” “ Provide i386 version of valgrind on amd64 hosts Current status: $ uname -m x86_64 $ valgrind ./a.out valgrind: failed to start tool ‘memcheck’ for platform ‘x86-linux’: No such file or directory ” “ Kmail Backup Recover I would like to have a Backup button (and Recover) for Kmail (and other programs). It could easily copy all necessary data files into a backup folder. Maybe selective backup (or selective recover) could be useful, as some of the configurations are no longer desired. ” “ opensuse 12.1 to implement Owncloud support Having Owncloud 2.0 support to: > Create a self-hosted instance > Template for easy web-hosted instance > Upload files from dolphin etc > Use docs/music/video/bookmarks from opensuse Apps Would be awesome! http://news.opensuse.org/2011/06/17/opensuse-and-your-own-cloud/ Can we have it please? ” “ Package Software Center with PackageKit backend As a result of the 2011 GSoC project – PackageKit backend and AppStream integration for Software Center – software-center and its dependencies need to be packaged for testing in openSUSE. The list of dependencies is: - pygobject (master branch, git.gnome.org) - PackageKit – 0.6.16 or newer - software-center itself (pk branch, soon to be master, https://code.launchpad.net/~alexeftimie/software-center/packagekit-backend ) - python-piston-mini (https://code.launchpad.net/piston-mini-client) Other dependencies: - po4a (needed by software-center setup.py) ” Feature Statistics Statistics for the openSUSE distribution in openFATE Testing Team “ Larry Finger: Weekly News for August 13 This is a reminder that the Testing Core Team will be holding our 3rd Open Bugs Day on August 21, 2011 from 0:00 to 23:59 UTC. The web page describing the event is http://en.opensuse.org/openSUSE:Open-Bugs-Day. The emphasis will be on finding which open bugs found in 11.4 are still present in 12.1 MS4.
Although there are no media that indicate they are MS4, it is possible to upgrade an MS3 installation to what is the equivalent of MS4. The procedure is to delete all existing repos and select the repos at http://download.opensuse.org/factory/repo/oss http://download.opensuse.org/factory/repo/non-oss http://download.opensuse.org/factory/repo/debug The last one is optional. Once these repos are selected (and enabled), then do ‘sudo zypper dup’. Although the result announces itself as MS3, it really is MS4. Our next IRC meeting will be at 17:00 UTC, August 15 on channel #opensuse-testing on the Freenode IRC network: irc://irc.freenode.net/opensuse-testing. We will discuss our experiences with MS4 and finish the planning for Open Bugs Day. ” Translation Team Daily updated translation statistics are available on the openSUSE Localization Portal. Trunk Top-List – Localization Guide In the Community Events & Meetings Past August 6-12, 2011 : The Desktop Summit (Berlin, Germany) August 08, 2011 : osc11 Orga Meeting August 09, 2011 : Birthday of the openSUSE Project August 10, 2011 : Project Meeting Upcoming August 15, 2011 : osc11 Orga Meeting August 16, 2011 : openSUSE Marketing Team Meeting August 17-19, 2011 : LinuxCon (Vancouver, Canada) August 21, 2011 : Open Bugs Day by the Testing Core Team August 22, 2011 : osc11 Orga Meeting You can find more information on other events at: openSUSE News/Events. – Local Events openSUSE for your Ears The openSUSE Weekly News is available as a podcast in German. You can listen to it or download it at http://saigkill.homelinux.net/podcast. Communication The Mailing Lists The openSUSE Forums Contributors openSUSE Connect Security Updates To view the security announcements in full, or to receive them as soon as they’re released, refer to the openSUSE Security Announce mailing list. “ openSUSE-SU-2011:0884-1: important: apache2-mod_fcgid: fixed possible stack overflow due to wrong pointer arithmetic (CVE-2010-3872) Table 1.
SUSE Security Announcement Package: apache2-mod_fcgid Announcement ID: openSUSE-SU-2011:0884-1 Date: Wed, 10 Aug 2011 13:08:16 +0200 (CEST) Affected Products: openSUSE 11.3 Vulnerability Type: A possible stack overflow in apache2-mod_fcgid due to wrong pointer arithmetic has been fixed. CVE-2010-3872 has been assigned to this issue. ” “ openSUSE-SU-2011:0897-1: critical: flash-player Table 2. SUSE Security Announcement Package: flash-player Announcement ID: openSUSE-SU-2011:0897-1 Date: Fri, 12 Aug 2011 05:08:25 +0200 (CEST) Affected Products: openSUSE 11.4 openSUSE 11.3 Vulnerability Type: An update that fixes 13 vulnerabilities is now available. It includes one version update. ” “ openSUSE-SU-2011:0902-1: important: ecryptfs-utils: Update to fix various symlink race attacks Table 3. SUSE Security Announcement Package: ecryptfs-utils Announcement ID: openSUSE-SU-2011:0902-1 Date: Fri, 12 Aug 2011 21:08:14 +0200 (CEST) Affected Products: openSUSE 11.4 openSUSE 11.3 Vulnerability Type: fixes several security problems ” Kernel Review “ h-online/Thorsten Leemhuis: Kernel Log: First release candidate for Linux 3.1 Expected to be released in about two months, the next kernel version will offer optimised virtualisation, add bad block management components to the software RAID code and include an extended Nouveau driver for NVIDIA’s Fermi graphics chips. Several developers have been criticised for their clumsy use of Git in this development cycle. Linus Torvalds has issued the first release candidate of Linux 3.1 – whose final release is expected in late September or early October – closing the merge window of this version 17 days after the release of Linux 3.0. The first phase in the Linux development cycle was therefore three days longer than usual. This was caused by the diving holiday Torvalds is currently taking in Hawaii; he is providing an impression of his trip on Google Plus.
Kernel development has now entered the stabilising phase, which Torvalds and his co-developers mainly use to fix bugs; no further major changes are usually integrated in this phase, and the most important advancements of Linux 3.1 can therefore already be outlined. For instance, the code for software RAIDs will, on some RAID levels, be able to handle media that contain defective blocks. (…) ” “ Rares Aioanei: kernel weekly news – 13.08.2011 Rares gives his weekly kernel review with openSUSE flavor. ” Tips and Tricks For Desktop Users “ MakeUseOf/Yaara Lancet: Organize Your Research With The New Standalone Zotero Everyone in the world of research, and many people outside of it, knows that a good citation manager and PDF organizer is a must if you want to keep track of all the papers you’ve read and accumulated. For the past two years, I’ve been using Zotero to organize my research for articles. Zotero is a Firefox add-on in which you can save citations and links to PDFs and access them at any time. One of Zotero’s biggest cons is that it works only within Firefox. This means you have to be running Firefox and use it to save the articles you find. As long as I was using Firefox, this was not a big issue, but when I switched to Chrome for a while, it became quite cumbersome. Not to mention the fact that I had to load Firefox every time I wanted to look at my database (which can be a lengthy process at times). This is why I was delighted to find out that Zotero has come up with Standalone Zotero Alpha (Windows, Mac and Linux) and with Chrome and Safari connectors for it. Note that this project is still in alpha, so it is not perfect by any means. These new versions came out in February, so hopefully we’ll see some updates soon. Now let’s see what it can do to organize your research. (…) ” “ Danny Kukawka: How to undelete files from ext3/ext4 Sometimes, especially on the command line, it happens that you delete a file or directory you didn’t really plan to delete.
A second after hitting enter you realize what you have done; maybe you are fast enough to stop the deletion process and save some files, but in most cases it’s already too late, at least for some files. If you have no backup, or only a too-old one, you’re screwed. If you use ext3/ext4 you may be able to recover the file with ext3grep or extundelete, using information from the file system journal, if the content of the file wasn’t already overwritten by new data. (…) ” “ Techlaze: Top 5 Plasma Widgets for the KDE Desktop With KDE 4.7, the KDE team has managed to create one of the most beautiful desktops out there, and to be honest, it’s even more appealing than Windows 7 or Mac OS X. On the usability front, KDE doesn’t seem to cut corners. Trademark features like Activities and Plasmoids (widgets) are polished to near perfection. Also, since the initial KDE 4 release, a lot of quality community-created widgets and plugins have sprung up, making the KDE workspace more than just an alternative to GNOME 3 or Unity. So, if you’ve just installed KDE on your computer, here are some of the best widgets you can drop onto your desktop to make your friends jealous. (…) ” “ Hubfolio/Matthew Casperson: Copying the contents of a file to the clipboard in Nautilus Hi – my name is Matthew, and I am a copy-and-paste addict. I have no idea how I would use a PC without a clipboard, and when I was on Windows, Clipmate was one of my favorite utilities. I have tried over the years to find open source alternatives, but nothing has come close. One script that I did come across recently allows you to copy the contents of files selected in the Nautilus Gnome file browser to the clipboard. This is great for copying things like code snippets and customized email signatures for those applications that don’t natively offer that functionality. (…) ” “ Linux Journal/Bruce Byfield: Printing in Scribus Scribus is designed for quality printing.
Unlike a word processor, its output is not meant simply to be good enough for practical use, but to be fine-tuned until it is as close as possible to what you want. For this reason, printing is considerably more complicated in Scribus than in the office applications with which you may be familiar. Fortunately, Scribus usually chooses defaults that fit most cases. It also provides rollover help that advises you on whether you need a setting — although, depending on your version of Scribus, some settings may not be included in this help. Still, once you know the workflow, printing in Scribus is relatively straightforward. Many of the options are either specifically for professional-quality printing, or for fixing specific problems. Taking the time to familiarize yourself with the options gives you the chance to come closer to the perfectionism that is impossible in office applications. (…) ” For Commandline/Script Newbies “ HowtoForge/Christian Schmalfeld: First Steps Of Running Linux Via Terminal Instead Of Desktop This tutorial is supposed to show new Linux users how to handle Linux without having to browse through your desktop to edit files. The core commands to do this are the same on every Linux distribution; however, there is a large variety of commands that differ from distribution to distribution, as does the install command. (…) ” For Developers and Programmers “ Python4Kids/Brendan Scott: Using Images in the GUI Gladys leaps over to the tape deck, presses levers and switches. Sound of tape reversing. There is a hum and lights flash on and off. A blurred image of a lady in the street comes up on one of the monitors. In this tutorial we look at using images in a GUI environment. One of the possible attributes which can be assigned to a Label or Button widget is an image. You will surely have seen many buttons with images on them. Every time you see a toolbar in a program, each of the images on it is itself on a button widget.
In order to do this tutorial though, there are a couple of issues: first, you need to have images in the Python4kids directory we set up several tutorials ago. You also need to start Python from within that directory. Second, unless you install something called the Python Imaging Library (PIL), Python can only handle a few limited types of image file. One of the file types which will work is .gif, which we will be using here. This tutorial will not work with jpeg files. Third, you need to have the gif files we’re going to use in your python for kids directory. (…) ” “ Nettuts+/Andrew Burgess: Ruby for Newbies: Testing with Rspec Ruby is one of the most popular languages used on the web. We’re running a Session here on Nettuts+ that will introduce you to Ruby, as well as the great frameworks and tools that go along with Ruby development. In this episode, you’ll learn about testing your Ruby code with Rspec, one of the best testing libraries in the business. (…) ” For System Administrators “ Wazi/Juliet Kemp: Supercharge Nagios with Plugins Nagios is a great application for monitoring your systems, allowing you to set alert levels and trip actions when those levels are reached. The software uses a plugin-based structure; even the simplest functions (such as check_ssh and check_disk) are plugins. This makes Nagios incredibly flexible; if there’s something you want to monitor, and you can think of a way to write it, you can write a plugin, hook it into Nagios, and start running it. But even better than that: for most things you might want to monitor, someone has already written the plugin for you. (…) ” “ Houcem Hachicha: Nmap: The Pentester’s One-Stop Shop to Network Domination Nmap is one of the best pieces of security software in the world. It is free and open source. It is actively developed and new features and improvements are added to it on a daily basis. Originally, Nmap is a network port scanner.
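The plugin contract that Juliet Kemp's Nagios article relies on is small enough to sketch in a few lines of shell: a plugin is simply an executable that prints one status line and exits 0 (OK), 1 (WARNING) or 2 (CRITICAL). The check name and the thresholds below are invented for illustration, not taken from her article:

```shell
#!/bin/sh
# check_load_sketch: a hypothetical, minimal Nagios-style plugin.
# It reads the 1-minute load average (Linux /proc interface) and maps
# it to a Nagios state: 0 = OK, 1 = WARNING, 2 = CRITICAL.
check_load_sketch() {
    warn=4 crit=8                          # made-up thresholds
    load=$(cut -d ' ' -f 1 /proc/loadavg)  # 1-minute load average
    # awk does the floating-point comparison and prints the state number
    state=$(awk -v l="$load" -v w="$warn" -v c="$crit" \
        'BEGIN { if (l >= c) print 2; else if (l >= w) print 1; else print 0 }')
    case $state in
        0) echo "LOAD OK - load average: $load" ;;
        1) echo "LOAD WARNING - load average: $load" ;;
        2) echo "LOAD CRITICAL - load average: $load" ;;
    esac
    return "$state"
}

status=0
check_load_sketch || status=$?   # run the check; keep its exit state
```

Placed in the Nagios plugin directory and referenced from a command definition, a script like this behaves just like the bundled check_load (which additionally accepts -w/-c options and emits performance data).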
The tool was then extended to perform service and OS identification. With the addition of the Nmap Scripting Engine (NSE) back in 2008, Nmap is today capable of performing vulnerability scanning and even exploitation. In this blog, I’ll try to describe some of the Nmap capabilities that can be harnessed in blackbox penetration testing. (…) ” “ Flossstuff/Ankur Aggarwal: Using A File As A Storage Device We all know that a file stores our information in many types of formats. But did you know that we can use it as a storage device too? Surprised? Let’s go through the crazy process :D We are going to create an empty file in Linux, format it and then mount it as if we were mounting a partition. This process is long, so to understand it easily I am dividing it into 4 steps. (…) ” “ IBM developerWorks/Sean A. Walberg: Learn Linux, 302 (Mixed environments): Print services In this article, learn to: Create and configure printer sharing Configure integration between Samba and the Common UNIX® Print System (CUPS) Manage Windows® print drivers, and configure downloads of print drivers Configure the [print$] share Understand security concerns with printer sharing Set up and manage print accounting (…) ” Planet SUSE “ Sebastian Kügler: Desktop Summit Thoughts I’ve been to the Desktop Summit in Berlin for the past few days. We’re now around the middle of the event, after the conference and before the workshop and BoF sessions, so I thought I might share some thoughts I’ve gathered in idle moments over the past few days. Boredom and diversity Last night, the build system BoF was planned, a team session where we look at the way we develop our software. I have to admit that to me, this is quite a boring (but nevertheless very important) topic. As it also affects the way we release software, I’ve put my release team hat on and joined the session.
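The four steps Ankur Aggarwal outlines for using a file as a storage device map onto four commands. A minimal sketch, with an arbitrary file name and size; the mount steps need root, so they are shown as comments:

```shell
# Step 1: create an empty 16 MiB file to serve as the backing store
dd if=/dev/zero of=/tmp/storage.img bs=1M count=16 status=none

# Step 2: put a filesystem on it; -F lets mkfs operate on a regular file
#   mkfs.ext3 -F /tmp/storage.img
# Step 3: mount it via a loop device, as if it were a partition (root needed)
#   mkdir -p /mnt/image && mount -o loop /tmp/storage.img /mnt/image
# Step 4: when done, unmount it
#   umount /mnt/image
```

Steps 2 to 4 are exactly what one would do with a real partition; the only loop-specific part is the -o loop mount option, which attaches the file to a loop device behind the scenes (losetup does the same job explicitly).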
I was a bit afraid that, since it’s not the most sexy topic in the world, few people would show up and we would end up with incomplete or broken ways to release the KDE SC, and KDE Frameworks in the future. My worries were unfounded, as quite a few people showed up and we made good progress on all the topics we talked about. (If you’re interested in what we talked about, keep an eye on the kde-core-devel and kde-buildsystem mailing lists.) What struck me is that in KDE, there are enough people who feel responsible, even for boring topics. When I shared my (unfounded) concerns with Stephen Kelly, he looked at me with this empty expression on his face and told me “but that’s exciting, it’s the way we build our software!”, and given his enthusiasm, I believe him (even if I don’t exactly share his excitement). Diversity makes us strong. (…) ” “ Matthias Hopf: SUSE graphical benchmark test We at SUSE needed to know whether we had some severe regressions regarding graphics performance during enablement of Intel SandyBridge graphics – and it turned out that it was not commonly understood what graphics performance is actually composed of. Some were only interested in core X commands (xterm users :-) , some only in render performance (office users :-) , some in low-core 3D graphics (compiz users :-) , some in hardcore 3D graphics (gamers :-) . So finally I put together a standardized graphical benchmark with aspects for all users. And no, it won’t output a single number, because that would be meaningless for everybody. But it makes it easy to compare different aspects between different graphics cards and drivers, and there are some surprising results. But more about the results later. The sources for the benchmark are now on Gitorious, and the Wiki entry describes its usage. It’s currently somewhat tailored to SLE11SP1, so you might run into minor issues when running it on a different OS version. And of course, it’s not very polished yet.
” “ Holger Hetterich: SMB Traffic Analyzer @ openSUSE conference Should you read this article in a blog roll, here is a clue as to what it is about: a) the SMB Traffic Analyzer project (SMBTA), b) the openSUSE conference, and c) Samba. You guessed it, we finally got a slot for a presentation on SMBTA at the wonderful openSUSE conference. To me, it is remarkable to see a project like SMBTA being presented at OSC, because it is not really something related to openSUSE. It’s not that SMBTA improves your boot time, or discusses details of the Build Service, or makes your life with the openSUSE distribution better in any way. SMBTA is very likely not even interesting to the casual user, except for some administrators. That said, SMBTA was born inside the openSUSE infrastructure, growing into a project used on different distributions and operating systems, such as Solaris. And the one thing we can really say is that we exploited all the services that make up openSUSE to the core. We used the openSUSE Build Service from the beginning, and we use appliances created by SUSE Studio for both demoing and developing SMBTA. With the recent release of Samba 3.6.0, which counts full SMB2 support among its top changes along with other major features, it is also prime time for SMBTA. The Virtual File System layer module that supports our current infrastructure is included in this release of the Samba CIFS server, and that marks a milestone for our project. SMBTA is already used in production at some sites, and the release of Samba 3.6.0 will hopefully further this trend. Benjamin Brunner and I will give an introduction talk on SMB Traffic Analyzer at the openSUSE conference and most likely live-demo the software chain. We welcome anyone interested to join our presentation at OSC! ” “ Sebastian Kügler: Board Business Tonight, the new board of directors of KDE e.V. went out for dinner, generously (!) treated by our constituency.
It was a nice and relaxed dinner, and gave us a good opportunity to brief Lydia (our newest board member) on how we work: boring stuff like where we store our documents, what to expect from our bi-weekly conference calls, what granularity of emailing we found to be productive, and so on. One official thing we always have to do (according to German association regulations, the so-called “Vereinsrecht”) is appointing roles. Cornelius was volunteered as president, Frank as treasurer; both accepted their new and old responsibilities. I was promoted from regular board member to vice president (which really only has a theoretical meaning). The vote was, as usual, a formal thing and we got it done between dumpling 2 and 3 on my plate; it took all of 3 minutes. Serious, effective, yet duly diligent. :P We also used the opportunity to talk about non-board stuff: about our other projects in KDE (we’re also pretty active in the community outside of the board chores), private goings-on, random fun things. I came back happy about our team, and looking forward to our work in the coming year. Just right. Earlier this afternoon, we met with the GNOME board. There were also some personnel changes in the new GNOME board; I especially enjoyed Ryan Lortie (desrt) having joined the board of directors of the GNOME foundation. I’ve met Ryan on several occasions in the past, and always found that we clicked well, with enough differences to keep conversations interesting, but very much on the same line of communication. One of the topics was communication across the boards, and we thought that having some kind of ‘open communication channel’ for situations which might turn unproductive would be good. Ryan and I volunteered, and we took the immediate opportunity to go out for an afternoon drink, which I very much enjoyed. While going to our dinner appointment, I really had two things in mind, love and hate.
Not sure why those two words sprang to my mind, but I really hate saying goodbye to the people I love. Even if it’s very much a temporary thing (our meetings in the Plasma team have become pretty frequent, especially with Plasma Active One on the horizon), having people leave after an intensive week of excellent collaboration always makes me kind of sad. That’s of course just an indication of how much I enjoy working in this excellent team, or maybe just a sign of exhaustion after a week of pushing the Free desktop to the next level with peers who are as passionate about it as I am. Tomorrow afternoon, I’ll take a train back home to the Netherlands and will commence putting our plans (and continuations and tweaks thereof) into action. I’m exhausted after a week of frantic Free software conferencing, but just as energized as if it were my first Akademy. The coming weekend will be used for catching up on sleep; then next weekend, I’ll be at FrOSCon, where I’ll be presenting Plasma Active. Be there if you want to touch it yourself. :) ” openSUSE Forums “ Problems downloading from repos Here’s a nice story, motto: “Don’t blame it all on your operating system”. The thread starter reports an issue with downloading packages from the repos, so he cannot update or install anything. A happy ending is included, but in a slightly different way than one would think. ” “ Zypper update – Wrong digest A bit curious, this one. The user gets “wrong digest” errors when he tries to perform a “zypper up” on his newly installed openSUSE 11.4. An interesting thread with no true solution or explanation for the issue yet, but a good example of how others try to analyze the problem and help look for a way out. ” “ Adobe Flashplayer 11 for 64bit linux It’s been quite quiet on this front for a while; it even looked like Adobe was dropping 64-bit support for Linux altogether, but here’s a thread about the release of a beta version of Flash Player 11 in a 64-bit version.
For those interested in this matter, read it and share your own experiences. ” “ openSUSE language-specific subforums: We now host the following language-specific subforums under the umbrella of the openSUSE Forums: Main forums, English 中文 (Chinese) Nederlands (Dutch) Français (French) Deutsch (German) Ελληνικό (Greek) Magyar (Hungarian) 日本語 (Japanese) Portuguese Pусский (Russian) ” On the Web Announcements “ Samba Team Releases Version 3.6 The Samba Team is proud to announce the release of Samba 3.6, a major new release of the award-winning Free Software file, print and authentication server suite for Microsoft Windows® clients. The First Free Software SMB2 Server Samba 3.6 includes the first Free Software implementation of Microsoft’s new SMB2 file serving protocol. SMB2 within Samba is implemented with a brand new asynchronous server architecture, allowing Samba to display the performance enhancements SMB2 brings to Microsoft networking technology. Samba’s new SMB2 server has been tested by major vendors and has been able to double the performance of some network applications when run in conjunction with Microsoft Windows 7® clients. (…) ” Reports “ itworld/Brian Proffitt: KDE 5.0 roadmap announced The KDE desktop is about to take a major step forward, with the announcement today of the roadmap for KDE Frameworks 5.0. Most eyes in the Linux desktop world are on the Berlin Desktop Summit this week, as members of the GNOME and KDE camps come together for a joint technical conference running from August 6-12 at Humboldt University in Berlin. Currently, KDE seems to be making the most strides at the joint event, with the surprise announcement of the KDE 5.0 roadmap, which was revealed by KDE developer Aaron Seigo in his blog on Sunday.
According to Seigo, the new KDE Frameworks roadmap, which will encompass “the next major release of KDE’s libraries and runtime requirements,” will have an “emphasis… on modularity, dependency clarity/simplification and increasing quality to the next level.” (…) ” “ ZDNet/Steven J. Vaughan-Nichols: Cisco and Twitter join Linux patent protection pool In case you’ve been under a rock for the last decade, you might not know that today’s technology wars aren’t over who has the best prices, the most features, or the greatest quality. No, in 2011, instead of working on innovating, tech giants like Apple, Microsoft, and Oracle are now wasting their resources on intellectual property (IP) lawsuits. So, perhaps it should come as no surprise that networking powerhouse Cisco and social networking force Twitter are joining the Linux patent protection group, the Open Invention Network (OIN). (…) ” Reviews and Essays “ Wazi/Brian Proffitt: LibreOffice vs. OpenOffice.org: Showdown for Best Open Source Office Suite With the release of a new version of LibreOffice this month, it’s a good time to look at the two major open source office suites, LibreOffice and OpenOffice.org, to see what advantages each offers, and which is a better bet for end users. Both products are suites of office applications, comprising word processing, spreadsheet, presentation graphics, database, drawing, and math tools. Both also spring from the same code base. OpenOffice.org was created by a German company called Star Division, which Sun Microsystems bought in 1999. Originally the suite was called StarOffice, and it was popular in the European market as an alternative to Microsoft Office. After picking it up, Sun changed the name of the product to OpenOffice.org and released its code as open source. The product retained some popularity in the enterprise, partly because of its cross-platform capabilities and no-cost license.
In 2009, Oracle announced it would be acquiring Sun, and many wondered what would become of OpenOffice.org. When Oracle proved to be less than willing to share its plans for the product, a number of OpenOffice.org community members opted to fork the OpenOffice.org code. In November 2010, they created LibreOffice, to be managed by a new German non-profit called The Document Foundation. A few months later, Oracle opted to donate the OpenOffice.org project to the Apache Software Foundation, which today maintains OpenOffice.org as a so-called podling project until OpenOffice.org completes the migration process to become fully integrated within the Apache organization. (…) ” “ Charles E. Craig, Jr.: What is the difference between GNOME, KDE, Xfce, and LXDE? In Linux, there are so many choices, and this includes the desktop environments and window managers. Four of the most popular desktop environments in Linux are GNOME, KDE, Xfce, and LXDE. All four offer sophisticated point-and-click graphical user interfaces (GUIs) which are on par with the desktop environments found in Windows and Mac OS X. When you ask different people which of these four is best, you will likely get many different answers. So which is the best among GNOME, KDE, Xfce, and LXDE? Well… it is largely a matter of opinion, and the capabilities of your computer hardware can also be important in deciding. For example, users with older computers will be better served by choosing Xfce or especially LXDE, while users with newer hardware can get more desktop effects by choosing GNOME or KDE. My recommendation would be to try all four of these desktop environments and decide for yourself which one works best for you. GNOME, KDE, Xfce, and LXDE are all excellent, and to varying degrees each can be customized in a number of ways. My personal favorite is GNOME 2.x, which is slowly being replaced by GNOME 3, though (very fortunately) GNOME 2.x is still being kept alive in Linux Mint, Debian, and some other distros.
Of the most recent desktop environments, my favorite is the newly-released Xfce 4.8. (…) ” “ Network World/Stephen Spector: Can you be open source and not open source? Most people think of open source projects as having the following features in common: source code access, a process to submit code changes, a process to submit bugs, documentation (at varying levels of quality), ownership of the project trademark, and a public release schedule. Of course, there may be other items that are commonly considered features; however, I want to focus on the second bullet: the process to submit code changes. If an open source project has all of the above features but doesn’t accept code changes, is it no longer an open source project? (…) ” “ Wazi/Carla Schroder: Four Steps Toward a Successful Open Source Project There’s a lot to like about open source software. It can help your business by cutting costs and producing better software. It’s open, auditable, and customizable, and free of the restrictive, invasive licenses and EULAs that infest proprietary software. You can build a community around an open source project, one that incorporates contributions from both staff and outside developers. If you’re wondering how to start up and manage a genuine open source project, here are four fundamental tasks to get you started: start small, build trust and social capital, start smart, and build for the future. (…) ” Warning! “ The Telegraph: 10-year-old girl hacker discovers smartphone security flaw A 10-year-old girl has uncovered a security flaw in popular Android and iOS games after becoming “bored” at their slow pace. [Figure 1: Farmville is similar to the type of game vulnerable to the attack] The young hacker, whose real name has not been disclosed, presented her findings at the Defcon conference in Las Vegas as part of a competition to find the next generation of computer security experts.
Going by the hacker name “CyFi”, she found that she could manually advance the clock in unnamed games to avoid waiting for virtual crops to grow. Independent researchers have confirmed the vulnerability. She told CNET: “It was hard to make progress in the game, because it took so long for things to grow. So I thought, ‘Why don’t I just change the time?’” CyFi’s hack also involved circumventing measures within the games designed to catch such cheating. She found that disconnecting the device from WiFi and advancing the clock in small increments avoided detection. How much control the technique grants an attacker over a device, and which games are vulnerable, has not been revealed, in order to give developers time to patch the security flaw. ” Feedback Do you have comments on any of the things mentioned in this article? Then head right over to the comment section and let us know! Or, if you would like to be part of the openSUSE Weekly News team, check out our team page and join! If you don’t know how to contribute, just check out the Contribution Page. We also have an Etherpad, which you can use to submit news. Talk with us: Communicate with or get help from the wider openSUSE community via IRC, forums, or mailing lists; see Communicate. Visit our connect.opensuse.org page and give your feedback. Visit our Facebook fanpage. You can also give your feedback via bug tracking and feature requests. Keep updated: You can subscribe to the openSUSE Weekly News RSS feed at news.opensuse.org. DOCS: Visit the official openSUSE docs page: docs.opensuse.org.
Credits We thank the following people for this issue: Sascha Manns, Editor in Chief Satoru Matsumoto, Editorial Office Gertjan Lettink, Forums Section Thomas Hofstätter, Event Editor Thomas Schraitle, DocBook Consultant Acknowledgements We thank the following for this issue: RenderX XEP, PDF Creation and Rendering SyncRO Soft Ltd., Oxygen XML Editing iJoomla, Surveys Copyrights List of our licenses Permission information for our trademarks SUSE®, openSUSE®, the openSUSE® logo and Novell® are registered trademarks of Novell, Inc. Linux® is a registered trademark of Linus Torvalds. Translations openSUSE Weekly News is translated into many languages. Issue 188 is available in: English Coming soon: Japanese, Greek, German First published on: http://saigkill.homelinux.net
Posted over 12 years ago
Hello everyone! I am very pleased to announce the new issue (187) of openSUSE Weekly News in Greek. In this issue you will read about:
* Are you ready for RWX³?
* 1st Greek openSUSE Collaboration Summer Weekend Camp: The Report
* Dominik Zajac: Use SSH for more secure browsing in public networks
* Wazi/Dmitry Kaglik: How To Create an Ebook with OpenOffice.org
* Linux.com/Jack Wallen: Advanced Layering Techniques on Linux with GIMP
As well as many other interesting news items about openSUSE and useful advice that can make our lives easier. Enough said though… Read more at: http://own.opensuse.gr, http://el.opensuse.org/Weekly_news or www.os-el.gr We are always looking forward to receiving your comments as well as suggestions regarding things you would like to read about in our next issue. The openSUSE Weekly News has been translated into Greek since issue #150. You can read older translated issues here: http://el.opensuse.org/Κατηγορία:Weekly_news_issues Enjoy it! Efstathios Agrapidis (efagra)