Posted over 10 years ago
Here's my new contribution to the Daala demo effort. Perceptual Vector Quantization has been one of the core ideas in Daala, so it was time for me to explain how it works. The details involve lots of maths, but hopefully this demo will make the general idea clear enough. I promise that the equations in the top banner are the only ones you will see!
|
Posted over 10 years ago by Andreas
Yesterday I was at Cisco’s Collaboration Summit where Cisco’s CTO for Collaboration Jonathan Rosenberg and I showed Cisco’s new WebRTC-based Project Squared collaboration service running in Firefox, talking to a Cisco Collaboration Desktop endpoint without requiring transcoding.
This demo is the culmination of a year long collaboration between Cisco and Mozilla in the WebRTC space. WebRTC enables voice and video communication directly from within the browser. This means that anyone can build a video conferencing service just using WebRTC and HTML5 standards, without the need for the user to download a plugin or a native application.
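As a rough illustration (not Cisco's or Mozilla's actual code), the browser side of such a service comes down to a handful of standard APIs; sendViaSignaling below is a hypothetical helper, since the WebRTC standards leave signaling up to the application:
// Minimal sketch: capture media and start a call with standard WebRTC APIs.
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function (stream) {
    var pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.org' }] });
    stream.getTracks().forEach(function (track) {
      pc.addTrack(track, stream); // send our audio/video to the peer
    });
    return pc.createOffer()
      .then(function (offer) { return pc.setLocalDescription(offer); })
      .then(function () { sendViaSignaling(pc.localDescription); }); // hypothetical helper
  });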
Cisco is not only developing WebRTC-based services that run on the Web. They have also joined a growing number of organizations and companies helping Mozilla to build a better Web. Over the last year Cisco has contributed numerous technical improvements to Mozilla’s WebRTC implementation, including support for screen sharing and the H.264 video codec. These features are now shipping in Firefox. We intend to use them in the future in Mozilla’s own Hello communication service that we are bringing to Firefox.
Cisco’s contributions to the Web go beyond just advancing Firefox. For the last three years the IETF, the standards body defining the networking protocols for WebRTC, has been unable to agree on a mandatory video codec for WebRTC, putting ubiquitous interoperability in doubt.
One of the major blockers to coming to a consensus was that H.264 is subject to royalty-bearing patents, which made it problematic for open source projects such as Firefox to deploy it. To break this logjam, Cisco open-sourced its H.264 code base and made it available in plugin form. Any product — not just Firefox — can download the plugin and use it to enable H.264 without paying any royalties.
This collaboration between Mozilla and Cisco enabled Firefox to add support for H.264 in WebRTC, and also played a significant role in the compromise reached at the last IETF meeting to adopt both H.264 and VP8 as mandatory video codecs for WebRTC in browsers. As a result of this compromise, in the future all browsers should match the capabilities already available in Firefox.
Mozilla will continue to work on advancing Firefox and the Web, and we are excited to have strong partners like Cisco who share our commitment to the open Web as a shared technology platform.
|
Posted over 10 years ago by Adam
I posted to the fundraising.mozilla.org blog today:
http://fundraising.mozilla.org/will-our-latest-donation-form-help-us-raise-more-money-this-year
|
The PWFG group has suggested two new methods for the DOM Element interface. These methods reflect the role and name accessibility concepts, and were named computedRole and computedLabel. I have a bunch of issues with this approach that I wanted to outline here, just to keep things in one place.

The purpose
I've been told that the primary reason is testing, but having role and name only is not enough to run UAIG tests or any accessibility automation tool, since those require other accessibility properties as well. They also say it might be used for non-accessibility purposes. I realize that the semantics ARIA adds can be used by non-assistive technologies: in Firefox we have a large number of non-AT consumers, but in most cases we don't have a good idea of what they use it for. So I don't really have the use case, and thus it's hard to say whether accessible role and name alone work well for non-a11y purposes. As for assistive technologies, I think they need a much larger API anyway.

Blowing up the DOM
Anything useful will require extra accessible properties, as I said above: accessible description, states, relations, the ability to navigate the hierarchy, and so on. That means sooner or later the Element interface has to be changed to a great extent. Check out AtkObject to get an idea of the possible changes. In the beginning, accessibility interfaces were built on top of the DOM, and later they were turned into full APIs. Now we are facing the reverse process: accessibility APIs are moving back into the DOM. I'm not sure that's a good idea, because accessibility tasks are quite specific, and an accessibility API may not be suitable for the common needs of web apps.

Restrictions
Not every semantically meaningful piece on the screen has a DOM node; for example, list bullets don't necessarily have DOM elements associated with them. So an Element-based accessibility API is too restrictive to fit the requirements of assistive technologies.

Performance
Last but not least is the performance issue. In most browsers the accessibility engine is kept separate and runs on demand. If accessibility is merged into the DOM, nothing tells the developer that these methods may trigger heavy accessibility computations and slow the app down. Surely browsers will learn to get smarter about this, but the approach takes a perf hit either way.

What's it going to be then, eh?
The idea is to provide a separate accessibility interface. If you like, it can be done in parts: for example, introduce role and name only in the first round, just as the original proposal says, and think about adding the other properties later. This idea was welcomed initially, but was later rejected as being too complex and accessibility-centric. But, and this is the most important thing, it doesn't have the disadvantages the Element approach has.
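To make the two shapes concrete, here's a rough sketch. computedRole and computedLabel are the names from the proposal; getAccessibleNode and the properties on the object it returns are hypothetical names for the separate-interface idea:
// The proposal as described: role and name exposed directly on Element.
var closeButton = document.querySelector('#close');
console.log(closeButton.computedRole);  // e.g. "button"
console.log(closeButton.computedLabel); // e.g. "Close"

// The separate interface sketched in this post; every name below is
// hypothetical. The returned object can grow more properties in later
// rounds without touching Element again.
var acc = closeButton.getAccessibleNode();
console.log(acc.role, acc.label); // first round: role and name only
// later: acc.description, acc.states, acc.relations, acc.parent, acc.children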
|
Posted over 10 years ago by Andreas
Principle 4 of the Mozilla Manifesto states: Individuals’ security and privacy on the Internet are fundamental and must not be treated as optional.
Unfortunately, treating user security as optional is exactly what happens when sites let users connect over insecure HTTP rather than HTTP over TLS (HTTPS). What insecure means here is that your network traffic is totally unprotected and can be read and/or modified by anyone who shares a network with you, including random people sharing Starbucks or airport WiFi.
One of the biggest reasons that web sites don’t deploy TLS is the requirement to get a digital certificate — a cryptographic credential which allows a user’s browser to know it’s talking to the right site and not to an attacker. Certificates are issued by Certificate Authorities (CAs) often using a clumsy and error-prone manual process. A further disincentive to deployment is that most CAs charge a fee for their certificates, which not only prices some people out of the market but also interferes with automatic issuance and renewal.
Mozilla, along with our partners Akamai, Cisco, EFF, and IdenTrust, decided to do something about this situation. Together, we’ve formed a new consortium, the Internet Security Research Group, which is starting Let’s Encrypt, a new certificate authority designed to bring security to everyone. Let’s Encrypt is built around a few key principles:
Free: Certificates will be offered at no cost.
Automatic: Certificates will be issued via a public and published API, allowing Web server software to automatically obtain new certificates at installation time and without manual intervention.
Independent: No piece of infrastructure this important should be controlled by a single company. ISRG, the parent entity of Let’s Encrypt, is governed by a board drawn from industry, academia, and nonprofits, ensuring that it will be operated in the public interest.
Open: Let’s Encrypt will be publishing its source code and protocols, as well as submitting the protocols for standardization so that server software as well as other CAs can take advantage of them.
Let’s Encrypt will be issuing its first real certificates in Q2 2015. In the meantime, we have published some initial protocol drafts along with a demonstration client and server at: https://github.com/letsencrypt/node-acme and https://github.com/letsencrypt/heroku-acme. These are functional today and can be used to issue test certificates.
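To give a sense of the end state once a certificate has been issued, here's a minimal sketch of a Node server loading one; the file paths are hypothetical placeholders:
// Minimal sketch: serve HTTPS with an issued certificate (paths hypothetical).
var https = require('https');
var fs = require('fs');

var options = {
  key: fs.readFileSync('/etc/tls/example.com.key'),
  cert: fs.readFileSync('/etc/tls/example.com.crt')
};

https.createServer(options, function (req, res) {
  res.writeHead(200);
  res.end('hello over TLS\n');
}).listen(443);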
It’s been a long road getting here and we’re not done yet, but this is an important step towards a world with TLS Everywhere.
|
Posted over 10 years ago by sole
I attended dotJS yesterday, where I gave a very short version of last week’s talk at Full Frontal (18 minutes versus 40).
The conference happened in a theatre and we were asked not to use a bright background, so I changed my slides to be darker and classier.
It didn’t really go as smoothly as I expected (a kernel panic a bit before the start of the talk, and I got nervous and distracted, so I got more nervous and…), but I guess I can’t always WIN! It was fun to speak French, if only for one line, though: Je suis très contente d’être parmi vous! (“I am very happy to be among you!”) Thanks to Thomas for the assistance in coming up with the perfect presentation line, and to Guillaume and Sasha for listening to me repeat it until it resembled passable French!
Until the video is edited and released, here’s a sample in the form of slides, online, and their source code on GitHub.
It was fun to use CSS filters to invert the images so they would not be a big white block on top of a dark background. Yay CSS filters!
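/* invert the colors of white-background images; brightness(2) lifts the
   inverted result so it doesn't look muddy on a dark background */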
.filter-invert {
filter: invert(100%) brightness(2);
}
I also used them in transitions between slides: I discovered that I could blur between slides. Cinematic effects! (Sorta, as I can’t get a vertical/horizontal-only blur.) But:
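/* inactive slides get a blur; the active slide drops the filter, so
   transitions animate from blurry to sharp */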
.bespoke-active.emphatic-text {
filter: none;
}
.bespoke-inactive.emphatic-text {
filter: blur(10px);
}
I use my custom plugin presentation-fullscreen for getting real fullscreen in my slides. It’s on npm:
npm install presentation-fullscreen --save
then just
require('presentation-fullscreen');
will add a new option to the contextual menu for making the whole body go fullscreen.
I shall soon write about this tip and how I use bespoke.js in general, plus a couple of thoughts and ideas I had during the conference. Topics include (so I don’t forget): why a mandatory lack of anonymity is not the solution to doxxing, and the ideal talk length.
|
Posted over 10 years ago
The Firefox 34 release date will move out one week, from Nov 25 to Dec 1/2. This change impacts Firefox Desktop, Firefox for Android, Firefox ESR, and Thunderbird. The purpose of this change is to allow for an additional week of stabilization during the 34 cycle.
Details of the change:
Release date change from Nov 25 to Dec 1/2 (need to determine the date that works best given the work week)
Merge date change from Tue, Nov 24 to Fri, Nov 28
Two additional desktop betas (10 and 11) will be added to the calendar this week on our usual beta build schedule (build Mon and Thu, release Tue and Fri)
One additional mobile beta (beta 11) will be added to the schedule. Note that mobile beta 10 will go to build (gtb) on schedule on Mon; mobile beta 11 will gtb on Thu with desktop in order to be ready early the following week.
RC builds will happen on Mon, Nov 24
Note that we are effectively moving into the 34 Beta cycle an extra week that we had previously added to the 35 Beta cycle: 35 will have a 7 week Aurora cycle instead of a 7 week Beta cycle.
|
Posted over 10 years ago by patrickfinch
This article may or may not be pay-walled, depending on how you arrive at it. It is an exploration of the shift to apps.
The history of computing is companies trying to use their market power to shut out rivals, even when it’s bad for innovation and the consumer… That doesn’t mean the Web will disappear. Facebook and Google still rely on it to furnish a stream of content that can be accessed from within their apps. But even the Web of documents and news items could go away. Facebook has announced plans to host publishers’ work within Facebook itself, leaving the Web nothing but a curiosity, a relic haunted by hobbyists.
This is something I was getting at with my post yesterday: that advertising remains one of the Web’s unique selling points. It is much more effective as an advertising platform than mobile apps are. At the moment, the Internet giants extract an enormous amount of value from the content on the Web, using it to drive engagement with their services. The Web has very low barriers to entry, but economic sustainability is difficult and the only proven revenue model appears to be advertising at scale. The model needs liberating.
(Note: the source of this article, the Wall Street Journal, may appear to refute that, given it has a paywall, but I believe that their model is essentially freemium and it isn’t clear to me what revenues they derive from subscription customers.)
|
Posted over 10 years ago
When we run ALTER statements on our big tables we have to plan ahead to keep
from breaking whatever service is using the database. In MySQL, many times* a
simple change to a column (say, from being a short varchar to being a text
field) can read-lock the entire table for however long it takes to make the
change. If you have a service using the table when you begin the query you'll
start eating into your downtime budget.
If you have a large enough site to have database slaves you'll have a
double-whammy - all reads will block on the master altering the table, and then,
by default, the change will be replicated out to your slaves and not only will
they read-lock the table while they alter it, but they will pause any further
replication until the change is done, potentially adding many more hours of
outdated data being returned to your service as the replication catches up.
The good news is, in some situations, we can take advantage of having database
slaves to keep the site at 100% uptime while we make time consuming changes to
the table structure. The notes below assume a single master with multiple
independent slaves (meaning, the slaves aren't replicating to each other).
Firstly, it should go without saying, but the client application needs to
gracefully handle both the existing structure and the anticipated structure.
When you're ready to begin, pull a slave out of rotation and run your alter
statement on it. When it completes, put the slave back into the cluster and let
it catch up on replication. Repeat those steps for each slave. Then fail over
one of the slaves to become the new master, pull the old master out of rotation,
and run the alter statement on it. Once it has finished, put it back in the cluster
as a slave. When the replication catches up you can promote it back to the
master and switch the temporary master back to a slave.
At this point you should have the modified table structure everywhere and be
back to your original cluster configuration.
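As a sketch of how this procedure might be automated, here's a hedged example
in Node. The load-balancer and failover helpers (removeFromRotation,
addToRotation, failOverTo) are hypothetical, as is the example ALTER; SHOW
SLAVE STATUS and its Seconds_Behind_Master column are standard MySQL.
// Rolling ALTER across a cluster of one master and independent slaves.
const mysql = require('mysql2/promise');

const ALTER_SQL = 'ALTER TABLE comments MODIFY body TEXT'; // hypothetical change

async function waitForCatchUp(conn) {
  // Poll replication until this slave has applied everything from the master.
  for (;;) {
    const [rows] = await conn.query('SHOW SLAVE STATUS');
    if (rows.length && rows[0].Seconds_Behind_Master === 0) return;
    await new Promise(resolve => setTimeout(resolve, 5000));
  }
}

async function alterOutOfRotation(host) {
  await removeFromRotation(host); // hypothetical: stop sending reads here
  const conn = await mysql.createConnection({ host, user: 'admin', password: '...' });
  try {
    await conn.query(ALTER_SQL); // read-locks the table on this host only
    await waitForCatchUp(conn);  // let it catch up before serving reads again
  } finally {
    await conn.end();
  }
  await addToRotation(host);     // hypothetical: resume sending reads here
}

async function rollingAlter(master, slaves) {
  for (const slave of slaves) {
    await alterOutOfRotation(slave);
  }
  await failOverTo(slaves[0]);      // hypothetical: a slave becomes the new master
  await alterOutOfRotation(master); // the old master is altered as a slave
  await failOverTo(master);         // restore the original cluster roles
}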
Special thanks to Sheeri who explained how to do all
the above and saved us from temporarily incapacitating our service.
*Which changes will lock a table varies depending on the version of MySQL. Look
for "Allows concurrent DML?" in the table on this manual page.
|
Posted over 10 years ago by glob
the following changes have been pushed to bugzilla.mozilla.org:
[1096565] GET REST calls should allow arbitrary URL parameters to be passed in addition to the values in the path
[1097813] Bug.search causes error when using simple token auth and specifying ‘token’ instead of ‘Bugzilla_token’
[1036802] Requests to the native rest/bzapi endpoints with gzip encoding always result in HTTP/200 responses
[1097382] OS sniffing should detect Windows 10 from “Windows NT 6.4” instead of detecting Windows NT
[1098956] remove autoland support
[1100368] css concatenation breaks data: urls
discuss these changes on mozilla.tools.bmo.
|