
News

Posted over 3 years ago by Marco Castelluccio
TL;DR: For those of you who prefer an example to words, you can find a complete and simple one at https://github.com/marco-c/rust-code-coverage-sample.

Source-based code coverage was recently introduced in Rust. It is more precise than gcov-based coverage, with fewer workarounds needed. Its only drawback is that it makes the profiled program slower than gcov-based coverage does. In this post, I will show you a simple example of how to set up source-based coverage on a Rust project, and how to generate a report using grcov (in a readable format, or in a JSON format that can be parsed to generate custom reports or to upload results to Coveralls/Codecov).

Install requirements

First of all, let's install grcov:

```
cargo install grcov
```

Second, let's install the llvm-tools Rust component (which grcov will use to parse coverage artifacts):

```
rustup component add llvm-tools-preview
```

At the time of writing, the component is called llvm-tools-preview. It might be renamed to llvm-tools soon.

Build

Let's say we have a simple project, where our main.rs is:

```rust
use std::fmt::Debug;

#[derive(Debug)]
pub struct Ciao {
    pub saluto: String,
}

fn main() {
    let ciao = Ciao { saluto: String::from("salve") };

    assert!(ciao.saluto == "salve");
}
```

In order to make Rust generate an instrumented binary, we need to use the -Zinstrument-coverage flag (Nightly only for now!):

```
export RUSTFLAGS="-Zinstrument-coverage"
```

Now, build with cargo build. The compiled instrumented binary will appear under target/debug/:

```
.
├── Cargo.lock
├── Cargo.toml
├── src
│   └── main.rs
└── target
    └── debug
        └── rust-code-coverage-sample
```

The instrumented binary contains information about the structure of the source file (functions, branches, basic blocks, and so on).

Run

Now, the instrumented executable can be executed (cargo run, cargo test, or whatever). A new file with the extension .profraw will be generated.
It contains the coverage counters associated with your binary file (how many times a line was executed, how many times a branch was taken, and so on). You can define your own name for the output file (which might be necessary in some complex test scenarios, like we have in grcov) using the LLVM_PROFILE_FILE environment variable:

```
LLVM_PROFILE_FILE="your_name-%p-%m.profraw"
```

%p (the process ID) and %m (the binary signature) are useful to make sure each process and each binary has its own file. Your tree will now look like this:

```
.
├── Cargo.lock
├── Cargo.toml
├── default.profraw
├── src
│   └── main.rs
└── target
    └── debug
        └── rust-code-coverage-sample
```

At this point, we just need a way to parse the profraw file and the associated information from the binary.

Parse with grcov

grcov can be downloaded from GitHub (from the Releases page). Simply execute grcov in the root of your repository, with the --binary-path option pointing to the directory containing your binaries (e.g. ./target/debug). The -t option allows you to specify the output format:

- "html" for a HTML report;
- "lcov" for the LCOV format, which you can then translate into a HTML report using genhtml;
- "coveralls" for a JSON format compatible with Coveralls/Codecov;
- "coveralls+" for an extension of the former, with the addition of function information.

There are other formats too. Example:

```
grcov . --binary-path PATH_TO_YOUR_BINARIES_DIRECTORY -s . -t html --branch --ignore-not-existing -o ./coverage/
```

[Embedded here in the original post: the generated HTML report, followed by the equivalent gcov-based report for comparison.]
You can also run grcov outside of your repository; you just need to pass the path to the directory containing the profraw files and the directory containing the source (normally they are the same, but if you have a complex CI setup like we have at Mozilla, they might be totally separate):

```
grcov PATHS_TO_PROFRAW_DIRECTORIES --binary-path PATH_TO_YOUR_BINARIES_DIRECTORY -s PATH_TO_YOUR_SOURCE_CODE -t html --branch --ignore-not-existing -o ./coverage/
```

grcov has other options too; simply run it with no parameters to list them. In grcov's docs, there are also examples of how to integrate code coverage with some CI services.
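As a quick illustration of what branch-level coverage captures, here is a hypothetical function (not from the original post) where the single test exercises only one branch; in the generated report, the untaken branch would show up as uncovered:

```rust
/// Classify the sign of an integer. The test below only exercises the
/// `x >= 0` branch, so a branch-coverage report flags the `else` arm
/// as never taken.
pub fn sign(x: i32) -> &'static str {
    if x >= 0 {
        "non-negative"
    } else {
        "negative"
    }
}

#[cfg(test)]
mod tests {
    use super::sign;

    #[test]
    fn non_negative_case() {
        assert_eq!(sign(5), "non-negative");
    }
}

fn main() {
    // Running this (via `cargo run` or `cargo test`) with
    // RUSTFLAGS="-Zinstrument-coverage" set produces a .profraw file
    // that grcov can turn into a report.
    println!("{}", sign(5));
}
```

Adding a second test with a negative input would bring the `else` arm to full coverage, which is exactly the kind of gap the report makes visible.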
Posted over 3 years ago by Mozilla
California is on the move again in the consumer privacy rights space. On Election Day 2020, California voters approved Proposition 24, the California Privacy Rights Act (CPRA). CPRA – commonly called CCPA 2.0 – builds upon the less-than-two-year-old California Consumer Privacy Act (CCPA), continuing the momentum to put more control over personal data in people's hands, adding compliance obligations for businesses, and creating a new California Privacy Protection Agency for regulation and enforcement. With federal privacy legislation efforts stagnating over the last years, California continues to set the tone and expectations that lead privacy efforts in the US. Mozilla continues to support data privacy laws that empower people, including the European General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA) and now the California Privacy Rights Act (CPRA). And while CPRA is far from perfect, it does expand privacy protections in some important ways. Here's what you need to know.

CPRA includes requirements we foresee as truly beneficial for consumers, such as additional rights to control their information, including sensitive personal information, data deletion, correcting inaccurate information, and putting resources in a centralized authority to ensure there is real enforcement of violations.

CPRA gives people more rights to opt out of targeted advertising

We are heartened by the significant new right around "cross-context behavioral advertising." At its core, this right allows consumers to exert more control and opt out of behavioral, targeted advertising – it will no longer matter whether the publisher "sells" their data or not. This control is one that Mozilla has been a keen and active supporter of for almost a decade: from our efforts with the Do Not Track mechanism in Firefox, to Enhanced Tracking Protection, to our support of the Global Privacy Control experiment.
However, this right is not exercised by default: users must take the extra step of opting in to benefit from it.

CPRA abolishes "dark patterns"

Another protection the CPRA brings is prohibiting the use of "dark patterns", features of interface design meant to trick users into doing things that they might not want to do but that ultimately benefit the business in question. Dark patterns are used in websites and apps to give the illusion of choice, but in actuality are deliberately designed to deceive people. For instance, privacy-preserving options – like opting out of tracking by companies – often take multiple clicks and multiple screens to navigate before finally reaching the opt-out button, while the option to accept the tracking is one simple click. This is only one of many types of dark patterns. This behavior fosters distrust in the internet ecosystem and is patently bad for people and the web. And it needs to go. Mozilla also supports federal legislation that has been introduced to ban dark patterns.

CPRA introduces a new watchdog for privacy protection

The CPRA establishes a new data protection authority, the "California Privacy Protection Agency" (CPPA), the first of its kind in the US. This will improve enforcement significantly compared to what the currently responsible California Attorney General is able to do, with limited capacity and priorities in other fields. The CPRA designates funds to the new agency that are expected to be around $100 million. How the CPRA will be interpreted and enforced will depend significantly on who makes up the five-member board of the new agency, to be created by mid-2021. Two of the board seats (including the chair) will be appointed by Gov. Newsom, one seat by the attorney general, another by the Senate Rules Committee, and the fifth by the Speaker of the Assembly, to be filled in about 90 days.
CPRA requires companies to collect less data

CPRA requires businesses to minimize the collection of personal data (collect the least amount needed) – a principle Mozilla has always fostered, internally and externally, as core to our values, products and services. While the law doesn't elaborate on how this will be monitored and enforced, we think this principle is a good first step in fostering lean data approaches. However, the CPRA in its current form still puts the responsibility on consumers to opt out of the sale and retention of personal data. It also allows data-processing businesses to create exemptions from the CCPA's limit on charging consumers differently when they exercise their privacy rights. Neither provision corresponds to our goal of "privacy as a default".

CPRA becomes effective January 1, 2023, with a look-back period to January 2022. Until then, its provisions will need lots of clarification and more detail, to be provided by lawmakers and the new Privacy Protection Agency. This will be hard work for many, but we think the hard work is worth the payoff: for consumers and for the internet.

The post Four key takeaways to CPRA, California's latest privacy law appeared first on Open Policy & Advocacy.
Posted over 3 years ago by Jorge Villalobos
Here are our highlights of what's coming up in the Firefox 84 release:

- You can now zoom extension panels, popups, and sidebars using Ctrl+scroll wheel (Cmd+scroll wheel on macOS).
- Under certain circumstances, search engine changes weren't being reset when an add-on was uninstalled. This has now been fixed.

Manage Optional Permissions in Add-ons Manager

As we mentioned last time, users will be able to manage optional permissions of installed extensions from the Firefox Add-ons Manager (about:addons). We recommend that extensions using optional permissions listen for the browser.permissions.onAdded and browser.permissions.onRemoved API events. This ensures the extension is aware of the user granting or revoking optional permissions.

Thanks

We would like to thank Tom Schuster for his contributions to this release. The post Extensions in Firefox 84 appeared first on Mozilla Add-ons Blog.
Posted over 3 years ago by [email protected] (Robert)
When debugging graphical applications, it can be helpful to see what the application had on screen at a given point in time. A while back, we added this feature to Pernosco. This is nontrivial because in most record-and-replay debuggers the state of the display (e.g., the framebuffer) is not explicitly recorded. In rr, for example, a typical application displays content by sending data to an X11 server, but the X11 server is not part of the recording. Pernosco analyzes the data sent to the X11 server and reconstructs the updates to window state. Currently it only works for simple bitmap copies, but that's enough for Firefox, Chrome and many other modern applications, because the more complicated X drawing primitives aren't suitable for those applications and they do their complex drawing internally.

Pernosco doesn't just display the screenshots, it helps you debug with them. As shown in the demo, clicking on a screenshot shows a pixel-level zoomed-in view which lets you see the exact channel values in each pixel. Clicking on two screenshots highlights the pixels in them that are different. We know where the image data came from in memory, so when you click on a pixel we can trace the dataflow leading to that pixel and show you the exact moment(s) the pixel value was computed or copied. (These image debugging tools are generic and also available for debugging Firefox test failures.)

Try it yourself! Try Pernosco on your own code today!
Posted over 3 years ago by Alessio Placitelli
("This Week in Glean" is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

We have been working on Glean for a few years now, starting with an SDK with Android support and increasing our SDK platform coverage by implementing our core in Rust and providing language bindings for other platforms, well beyond the mobile space. Before our next major leaps (FOG, Glean.js), we wanted to understand what our internal consumers thought of Glean: what challenges are they facing? Are we serving them well?

Disclaimer: I'm not a user researcher, and I did my best to study (and practice!) how to make sure our team's and our customers' time investment would not be wasted, but I might still have gotten things wrong! "Interviewing Users: How to Uncover Compelling Insights" by Steve Portigal really helped me put things into perspective and get a better understanding of the process!

Here's what I learned from trying to understand how users work with Glean:

1. Rely on user researchers!

Humans can ask questions, that's a fact... right? Yes, but that's not sufficient to understand users and how they interact with the product. Getting a sense of what works for them and what doesn't is much more difficult than asking "What would you like our product to do?". If your team has access to UX researchers, join efforts and get together to better understand your users 🙂

2. Define who to interview

Since I am on the Glean SDK team, we did not interview any of my team peers. Glean has a very diverse user base: product managers, developers, data scientists, data engineers. All of them use the Glean SDK, Pipeline and Tools in slightly different ways! We decided to select representatives throughout Mozilla for each group.

Moreover, we did not necessarily want feedback exclusively from people who had already used Glean. Mozilla has a legacy telemetry system in Firefox that most of the company was exposed to, so we made sure to include both existing Glean users and prospective Glean users. We narrowed down the list to about 60 candidates, which was then refined to 50 candidates.

3. Logistics is important!

Before starting to collect feedback through interviews, we tried to make sure that everything was in place to provide a smooth experience both for us and the interviewed folks, since all the interviews were performed online:

- we set up a shared calendar to let interviewers grab interview slots;
- we set up a template for a note-taking document, attempting to set a consistent style for all the notes;
- we documented the high-level structure of the interview: introduction (5 minutes) + conversation (30 minutes) + conclusions (5 minutes);
- we set up email templates for inviting people to the interview; they included information about the interview, a link to an anonymous preliminary questionnaire (see the next point), a link to join the video meeting and a private link to the meeting notes.

4. Provide a way to anonymously send feedback

We knew that interviewing ~50 folks would take time, so we tried to get early feedback by sending an email to the engineering part of the company asking for it. By making the questionnaire anonymous, we additionally tried to make folks feel more comfortable providing honest feedback (and criticism!). We allowed participants to flag other participants and left the questionnaire open during the whole interview process. Some of the highlights were duplicated between the interviews and the questionnaire, but the latter got us a few insights that we were not able to capture in live interviews.

5. Team up!

Team support was vital during the whole process, as each interview required at least two people in addition to the interviewee: the interviewer (the person actually focusing on the conversation) and the note taker. This allowed the interviewer to pay attention exclusively to the conversation, keeping track of the context and digging into aspects of the conversation that they deemed interesting. In the last 5 minutes of each interview, the note taker would ask any remaining questions or fill in missing details.

6. Prepare relevant base questions

To interview folks consistently, we prepared a set of base questions to use in all the interviews. A few questions from this list differed depending on the major groups of interviewees we identified: data scientists (both who had and had not used Glean), product managers, developers (both who had and had not used Glean). We ended up with a list of 10 questions, privileging open-ended questions and avoiding leading or closed questions (well, except for the "Have you ever used Glean?" part 🙂 ).

7. Always have post-interview sync-ups

Due to the split between the note taker and the interviewer, it was vital for us to have 15 minutes, right after the interview, to fill in missing information in the notes and share context between the interviewing members. We learned after the initial couple of interviews that the longer we waited for the sync-up meeting, the foggier the notes appeared in our brains.

8. Review and work on notes as you go

While we had post-interview sync-ups, all the findings from each interview were noted down in the notes document for that interview. After about 20 interviews, we realized that all the insights needed to be in a single report: that took us about a week to do at the end of the interviewing cycle, and it would have been much faster to note them down in a structured format after each interview. Well, we learn by making mistakes, right?

9. Publish the results

The interviews were a goldmine of discoveries: our assumptions were challenged in many areas, giving us a lot of food for thought. Our findings did not exclusively touch our team! For this reason we decided to create a presentation and disseminate the information about our process and its results in a biweekly meeting in our Data Org. The raw insights have so far been shared with the relevant teams, who are triaging them.

10. Make the insights REALLY actionable

Our team is still working on this 🙂 We have received many insights from this process and are figuring out how to guarantee that all of them are considered and don't fall back into our team's backlog. We are considering reserving bandwidth to specifically address the most important ones on a monthly basis. Many of the findings have already informed our current choices and designs; some of them are changing our priorities and sparking new conversations in our whole organization.

We believe that getting user feedback early in the process is vital: part of this concern is eased by our proposal culture (before we dive into development, we asynchronously discuss with all the stakeholders on shared documents, with the intent of ironing out the design together!), but there is indeed huge value in performing more frequent interviews (maybe with a smaller number of folks) with all the different user groups.
Posted over 3 years ago by Emma Malysz
Highlights

- Firefox 83 went out yesterday!
- Started investigating making BrowserNotification look more like part of chrome, to eventually use as a UI for remote messages (in addition to CFR and What's New, etc.)
- All legacy actors have now been removed! Here's the latest Fission newsletter: Newsletter #9
- bytesized is working on downloading a new update while one is already staged, which should address a pain point going back more than a decade.
- Search Mode and Tab-to-search were just released in Firefox 83!
- The about:home startup cache has now been enabled by default on Nightly
- We shipped v29 of Firefox for iOS! It comes with widgets that allow you to bring your favourite sites and open tabs directly to your homescreen

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug: Andrey Bienkowski, Ben D (:rockingskier), Chris Jackson, Cody Welsh, Fabien Casters [:vaga], Hunter Jones, Itiel, Martin Stránský [:stransky], Michael Goossens, Niklas Baumgardner, Tim Nguyen :ntim, Tom Schuster [:evilpie]

New contributors (🌟 = first patch)

- akshay1992kalbhor fixed a problem with crash pings having null bytes after module names
- Niklas Baumgardner worked on a number of Picture-in-Picture bugs, like opening on the correct monitor, saving the last location and size of the window, and making sure the window is visible after changing the resolution
- Chris Jackson also helped with Picture-in-Picture bugs, like removing an experimental toggle, ensuring the description is correct, and fixing the missing padding on the toggle button
- 🌟 Andrey Bienkowski improved the guidelines for commit naming in DevTools and cleaned up DevTools code
- 🌟 Manekenpix fixed the selected nodes in markup view
- 🌟 Namandude1008 removed an unused caption rep
- 🌟 Nerixyz fixed an issue to show the correct icon next to extension sources
- 🌟 Seunbayo83 moved the object inspector outside of the reps folder
- Hunter Jones made it possible to have more than one Picture-in-Picture window and refactored Picture-in-Picture to not use a global state

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons

- Fixed an issue with built themes disappearing for one session after upgrading to Firefox 82 (fixed by Bug 1672314, caught due to the recent changes to the theme resource URLs introduced in Bug 1660557)
- Some minor follow-ups related to the new verified and Mozilla badges in the about:addons extensions list (Bug 1666042, Bug 1666503)
- Itiel contributed an RTL-related follow-up fix for the optional permissions list in the about:addons detail view (Bug 1672502, follow-up for Bug 1624513)
- ntim contributed some small refactoring for about:addons (Bug 1676292, Bug 1677530), in preparation for completely removing the remaining bits of the legacy XUL-based about:addons page

WebExtensions Framework

- Landed a fix to make sure that extension messaging Ports are garbage-collected when the related extension content is destroyed (Bug 1652925)
- Brad Werth made sure that extension popups and sidebar panels can be zoomed using Ctrl+scroll wheel, as in browser tabs (Bug 1634556)
- Mark Banner made sure we reset/restore the default search engine when an add-on overrode it and was then uninstalled at early startup (Bug 1643858)

WebExtension APIs

- Tom Schuster extended the browsingData API to support clearing the browsing data for a specific contained tab using a new optional cookieStoreId parameter (Bug 1670811, plus a follow-up fix in Bug 1675643)

Bookmarks

- "Other Bookmarks" folder in the Bookmarks Toolbar – If users have bookmarks stored in Other Bookmarks, a button for it will appear in the bookmarks toolbar (bug). An option to hide this folder from the toolbar is currently in progress (bug).
- Bookmarks are stored in the Bookmarks Toolbar by default – For new users, the default location for storing bookmarks is now the bookmarks toolbar (bug).
- "Import Bookmarks" button – New profiles will display an "import" button on the bookmarks toolbar (bug).
- Showing the Bookmarks Toolbar on the New Tab page by default, replacing the Bookmarks Toolbar hide/show toggle – New options for showing the bookmarks toolbar: "Always", "Never", and "Only on New Tab" (bug).
- A message describing the Bookmarks Toolbar and linking to the library is shown on the toolbar when it is blank – If there are any bookmarks in the "Bookmarks Toolbar" folder or any other widgets on the toolbar, this message will not be shown (the "Other Bookmarks" symlink folder does not count)

Developer Tools

- Network Panel – Introduced a top-level error component responsible for catching exceptions and rendering details, a stack trace, and a link for filing a Bugzilla report (bug)
- Performance panel – Building a simple on-boarding UI for the new performance panel (bug). The new profiler panel is based on the Firefox Profiler: profiler.firefox.com
- Accessibility Panel – Showing tab order on the current page, done by Yura Zenevich (bug); shipped in Firefox 84

Fission

- DevTools Fission – Making DevTools Fission-compatible. Fission tests are now enabled on tier 1 (bug). Continuing to make DevTools Fission-compatible (wiki with known issues). The project has 6 MVP items remaining, to be completed Dec 14 – Dec 20.
- Marionette Fission – Making Marionette (the automation driver for Firefox) Fission-compatible. The project has 13 MVP items remaining, to be completed Nov 09 – Nov 22. Enabling Marionette's new Fission-compatible implementation (based on JSWindowActors) fixed a memory leak and improved performance 15-20% across all platforms (bug).
Installer & Updater

- mhowell is wrapping up work on the semaphore, to prevent multiple instances from updating each other, and to let the user know when Firefox can't update as a result
- agashlin landed a new uninstall ping, so we should get more information about users explicitly leaving Firefox (as opposed to silently ceasing to use Firefox)

Lint

- Sonia enabled all ESLint rules for widget/tests/*.xhtml – these were files where we had postponed fixing all the ESLint issues when moving from XUL to XHTML.
- Kris made it so that the ESLint list of services accessible via Services.* is now semi-automatically generated.

Password Manager

- Tgiles landed Bug 1613620 – allow removing/deleting all stored logins/passwords

PDFs & Printing

- emalysz updated the print dialog so it stays open if the user cancels choosing a filename for print-to-PDF
- emalysz updated the error handling so it allows changing the destination and cancelling the print if a setting is invalid
- emalysz updated the custom margin settings to account for the printed page orientation
- nordzilla added support for duplex printing (print on both sides)
- emilio fixed a bug where printing using the system dialog failed for about: pages
- emalysz fixed a bug where changing the paper size with custom margins set could result in an error without it being displayed to the user

Performance

- bigiri landed a major refactor of the ASRouter code!
- emalysz is mentoring bugs to help us transition off of OSFile over to IOUtils! Interested contributors are most welcome to pick a bug blocking this meta.
- mconley landed UserInteractions! These let us add BHR annotations for key user interaction flows, which will hopefully let us identify high-priority responsiveness issues from our BHR data.
- dthayer and emalysz have been making the pre-XUL skeleton UI for faster startup responsiveness more comprehensive. This includes drawing the URL bar, toolbar buttons, rounded rects and correct theme colours at startup. We hope to enable this by default in Nightly sometime next week once dthayer is back from PTO. It can be turned on by setting the browser.startup.preXulSkeletonUI pref to true.
- Gijs resolved a tab-switch regression caused by recent bookmarks toolbar work, and is investigating some other Talos regressions also caused by that work
- mconley fixed a responsiveness Talos regression for the about:newtab page caused by the about:home startup cache work

Performance Tools

- Added shortcuts for call tree transforms.
- Added a keyboard shortcut panel that is revealed by the shortcut "?".
- The "Profile Info" panel now includes how many physical and logical CPU cores there are on the profiled machine.

Picture-in-Picture

We've introduced an experimental capability for having multiple concurrent Picture-in-Picture player windows. You can enable it in about:preferences#experimental. We're very curious about how people might use multiple player windows – here's a form to let us know if you find multiple player windows useful!

Lots of fixes in the past few weeks from our MSU students:

- Bug 1672401 – PiP description is displayed incorrectly on whereby.com
- Bug 1671588 – PiP window is not visible if enabled right after changing resolution
- Bug 1589680 – Make it possible to have more than one Picture-in-Picture window
- Bug 1545752 – The Picture-in-Picture window opens on the main monitor even if the browser is opened on a secondary monitor
- Bug 1578985 – Picture-in-Picture does not remember the location and size of the popout windows

Push

We are working with Ops and QA to coordinate testing (and set up a staging server) for a major port of the Push endpoint logic, completed this past summer, from Python to Rust. Kudos to our intern mdrobnak for successfully completing this massive project. We expect to deploy to production in early 2021.
Search and Navigation

- We’re running a holdback experiment to measure the impact of the new search mode feature on Release.
- We’re also working on various experiments related to vertical search in the address bar, both with partners and with cool utils (weather, calculator, unit conversions).
- Tweaked the tab-to-search onboarding result so it is not dismissed too easily; it now requires 3 interactions (simply selecting the result counts as one) – Bug 1675611
- Based on user-testing feedback, mostly to reduce the surprise impact, empty strings in search mode no longer show the last executed searches – Bug 1675537
- Search mode colors are now inverted on the Dark theme – Bug 1671668
- @keywords can now be completed with the Tab key – Bug 1669526
- URL canonization (Ctrl+Enter) no longer happens if a Ctrl+V just happened and Ctrl was not released before pressing Enter – Bug 1661000
- Fixed an issue in both the search bar and the address bar causing the last keyup event to reach content – Bug 1641287, Bug 1673299
- Fixed a regression where single words (like “space”) in search mode could open the “Did you mean to go to space” notification bar in the case of a wildcard DNS – Bug 1672509

Sync

- The tokenserver (which runs Python 2.7 and supports Firefox Sync) is being ported to Rust in Q4. You can follow along with progress here.
- A minor change was made to our batch commit limit to better work with Spanner. See the Spanner docs for more details on mutation limits.

User Journey

- Allow setting Firefox as the default browser with a remote message action.
- Fixed some tab focus related issues with the What’s New panel.

WebRTC UI

- The new WebRTC global indicator goes out for macOS and Windows today! \o/ This means system tray indicator icons for Windows, as well as an always-on-top indicator when sharing a display. There are also global mutes for the microphone and camera, but these are off by default.
Posted over 3 years ago by The Rust Release Team
The Rust team is happy to announce a new version of Rust, 1.48.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.48.0 is as easy as:

```
rustup update stable
```

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.48.0 on GitHub.

What's in 1.48.0 stable

The star of this release is Rustdoc, with a few changes to make writing documentation even easier! See the detailed release notes to learn about other changes not covered by this post.

Easier linking in rustdoc

Rustdoc, the library documentation tool included in the Rust distribution, lets you write documentation in Markdown. This makes it very easy to use, but also has some pain points. Let's say that you are writing some documentation for some Rust code that looks like this:

```rust
pub mod foo {
    pub struct Foo;
}

pub mod bar {
    pub struct Bar;
}
```

We have two modules, each with a struct inside. Imagine we wanted to use these two structs together; we may want to note this in the documentation. So we'd write some docs that look like this:

```rust
pub mod foo {
    /// Some docs for `Foo`
    ///
    /// You may want to use `Foo` with `Bar`.
    pub struct Foo;
}

pub mod bar {
    /// Some docs for `Bar`
    ///
    /// You may want to use `Bar` with `Foo`.
    pub struct Bar;
}
```

That's all well and good, but it would be really nice if we could link to these other types. That would make it much easier for the users of our library to navigate between them in our docs. The problem here is that Markdown doesn't know anything about Rust or the URLs that rustdoc generates. So what Rust programmers have had to do is write those links out manually:

```rust
pub mod foo {
    /// Some docs for `Foo`
    ///
    /// You may want to use `Foo` with [`Bar`].
    ///
    /// [`Bar`]: ../bar/struct.Bar.html
    pub struct Foo;
}

pub mod bar {
    /// Some docs for `Bar`
    ///
    /// You may want to use `Bar` with [`Foo`].
    ///
    /// [`Foo`]: ../foo/struct.Foo.html
    pub struct Bar;
}
```

Note that we've also had to use relative links, so that this works offline. Not only is this process tedious and error-prone, but it's also just wrong in places. If we put a pub use bar::Bar in our crate root, that would re-export Bar in our root. Now our links are wrong. But if we fix them, then they end up being wrong when we navigate to the Bar that lives inside the module. You can't actually write these links by hand and have them all be accurate.

In this release, you can use some syntax to let rustdoc know that you're trying to link to a type, and it will generate the URLs for you. Here are two different examples, based on our code from before:

```rust
pub mod foo {
    /// Some docs for `Foo`
    ///
    /// You may want to use `Foo` with [`Bar`](crate::bar::Bar).
    pub struct Foo;
}

pub mod bar {
    /// Some docs for `Bar`
    ///
    /// You may want to use `Bar` with [`crate::foo::Foo`].
    pub struct Bar;
}
```

The first example will show the same text as before, but generate the proper link to the Bar type. The second will link to Foo, but will show the whole crate::foo::Foo as the link text. There are a bunch of options you can use here. Please see the "Linking to items by name" section of the rustdoc book for more. There is also a post on Inside Rust on the history of this feature, written by some of the contributors behind it!

Adding search aliases

You can now specify #[doc(alias = "")] on items to add search aliases when searching through rustdoc's UI. This is a smaller change, but still useful. It looks like this:

```rust
#[doc(alias = "bar")]
struct Foo;
```

With this annotation, if we search for "bar" in rustdoc's search, Foo will come up as part of the results, even though our search text doesn't have "Foo" in it.
An interesting use case for aliases is FFI wrapper crates, where each Rust function could be aliased to the C function it wraps. Existing users of the underlying C library would then be able to easily search for the right Rust functions!

Library changes

The most significant API change is kind of a mouthful: [T; N]: TryFrom<Vec<T>> is now stable. What does this mean? Well, you can use this to try and turn a vector into an array of a given length:

```rust
use std::convert::TryInto;

let v1: Vec<u32> = vec![1, 2, 3];

// This will succeed; our vector has a length of three, and we're trying to
// make an array of length three.
let a1: [u32; 3] = v1.try_into().expect("wrong length");

// But if we try to do it with a vector of length five...
let v2: Vec<u32> = vec![1, 2, 3, 4, 5];

// ... this will panic, since we have the wrong length.
let a2: [u32; 3] = v2.try_into().expect("wrong length");
```

In the last release, we talked about the standard library being able to use const generics. This is a good example of the kinds of APIs that we can add with these sorts of features. Expect to hear more about the stabilization of const generics soon.

Additionally, five new APIs were stabilized in this release:

- slice::as_ptr_range
- slice::as_mut_ptr_range
- VecDeque::make_contiguous
- future::pending
- future::ready

The following previously stable APIs have now been made const:

- Option::is_some
- Option::is_none
- Option::as_ref
- Result::is_ok
- Result::is_err
- Result::as_ref
- Ordering::reverse
- Ordering::then

See the detailed release notes for more.

Other changes

There are other changes in the Rust 1.48.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.48.0

Many people came together to create Rust 1.48.0. We couldn't have done it without all of you. Thanks!
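As a closing illustration, the newly stabilized and newly const-stabilized APIs above can be exercised together in one short program. This is a hedged sketch written for this post, not an example from the release notes; it avoids the panicking `expect` path by matching on the `Result` instead:

```rust
use std::collections::VecDeque;
use std::convert::TryInto;

fn main() {
    // TryFrom<Vec<T>> for [T; N]: a failed conversion hands the original
    // Vec back in the Err variant, so no data is lost.
    let v: Vec<u32> = vec![1, 2, 3, 4, 5];
    let attempt: Result<[u32; 3], Vec<u32>> = v.try_into();
    assert_eq!(attempt.unwrap_err().len(), 5);

    // VecDeque::make_contiguous rearranges the ring buffer so its contents
    // occupy one contiguous slice, which we can then sort in place.
    let mut dq: VecDeque<u32> = VecDeque::from(vec![3, 1, 2]);
    dq.rotate_left(1); // force a wrapped internal layout
    dq.make_contiguous().sort();
    assert_eq!(dq, VecDeque::from(vec![1, 2, 3]));

    // slice::as_ptr_range yields the slice's start and end pointers.
    let bytes = [10u8, 20, 30];
    let range = bytes.as_ptr_range();
    assert_eq!(range.end as usize - range.start as usize, bytes.len());

    // Option::is_some is now a const fn, so it works in const contexts.
    const HAS_VALUE: bool = Some(5).is_some();
    assert!(HAS_VALUE);
}
```

Because the Err variant of the array conversion carries the original vector, this pattern lets you fall back gracefully when the length doesn't match, rather than panicking.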
Posted over 3 years ago by Owen Bennett
For a number of years now, we have been working hard to update and secure one of the oldest parts of the Internet, the Domain Name System (DNS). We passed a key milestone in that endeavor earlier this year, when we rolled out the technical solution for privacy and security in the DNS – DNS-over-HTTPS (DoH) – to Firefox users in the United States. Given the transformative nature of this technology and our mission commitment to transparency and collaboration, we have consistently sought to implement DoH thoughtfully and inclusively. Therefore, as we explore how to bring the benefits of DoH to Firefox users in different regions of the world, we’re today launching a comment period to help inform our plans.

Some background

Before explaining our comment period, it’s first worth clarifying a few things about DoH and how we’re implementing it:

What is the ‘DNS’?

The Domain Name System (DNS for short) is a shared, public database that links a human-friendly name, such as www.mozilla.org, to a computer-friendly series of numbers, called an IP address (e.g. 192.0.2.1). By performing a “lookup” in this database, your web browser is able to find websites on your behalf. Because of how DNS was originally designed decades ago, browsers doing DNS lookups for websites — even for encrypted https:// sites — had to perform these lookups without encryption.

What are the security and privacy concerns with traditional DNS?

Because there is no encryption in traditional DNS, other entities along the way might collect (or even block or change) this data. These entities could include your Internet Service Provider (ISP) if you are connecting via a home network, your mobile network operator (MNO) if you are connecting on your phone, a WiFi hotspot vendor if you are connecting at a coffee shop, and even eavesdroppers in certain scenarios. In the early days of the Internet, these kinds of threats to people’s privacy and security were known, but not yet being exploited.
Today, we know that unencrypted DNS is not only vulnerable to spying but is being exploited, and so we are helping the Internet to make the shift to more secure alternatives. That’s where DoH comes in.

What is DoH and how does it mitigate these problems?

Following the best practice of encrypting HTTP traffic, Mozilla has worked with industry stakeholders at the Internet Engineering Task Force (IETF) to define a DNS encryption technology called DNS over HTTPS, or DoH (pronounced “dough”), specified in RFC 8484. It encrypts your DNS requests and responses between your device and the DNS resolver via HTTPS. Because DoH is an emerging Internet standard, operating system vendors and browsers other than Mozilla can also implement it. In fact, Google, Microsoft and Apple have either already implemented or are in the late stages of implementing DoH in their respective browsers and/or operating systems, making it only a matter of time before it becomes a ubiquitous standard that helps improve security on the web.

How has Mozilla rolled out DoH so far?

Mozilla has deployed DoH to Firefox users in the United States, and as an opt-in feature for Firefox users in other regions. We are currently exploring how to expand deployment beyond the United States. Consistent with Mozilla’s mission, in countries where we roll out this feature the user is given an explicit choice to accept or decline DoH, with a default-on orientation to protect user privacy and security.

Importantly, our deployment of DoH adds an extra layer of user protection beyond simple encryption of DNS lookups. It includes a Trusted Recursive Resolver (TRR) program, whereby DoH lookups are routed only to DNS providers who have made binding legal commitments to adopt extra protections for user data (e.g., to limit data retention to operational purposes and not to sell or share user data with other parties).
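For the technically curious, the RFC 8484 wire format mentioned above carries an ordinary DNS message (per RFC 1035) in the body of an HTTPS request with the application/dns-message media type. The sketch below, written for illustration only and not part of Firefox's implementation, builds such a query packet by hand; real clients should use a DNS library:

```rust
// Build a minimal wire-format DNS query for an A record, as a DoH client
// would POST it (Content-Type: application/dns-message) per RFC 8484.
fn build_query(name: &str) -> Vec<u8> {
    let mut q = Vec::new();
    // 12-byte header: ID 0 (RFC 8484 recommends ID 0 for cache
    // friendliness), flags 0x0100 (recursion desired), 1 question,
    // 0 answer / authority / additional records.
    q.extend_from_slice(&[
        0x00, 0x00, // ID
        0x01, 0x00, // flags: RD
        0x00, 0x01, // QDCOUNT
        0x00, 0x00, // ANCOUNT
        0x00, 0x00, // NSCOUNT
        0x00, 0x00, // ARCOUNT
    ]);
    // Question name: each label is length-prefixed, terminated by a zero byte.
    for label in name.split('.') {
        q.push(label.len() as u8);
        q.extend_from_slice(label.as_bytes());
    }
    q.push(0);
    // QTYPE = A (1), QCLASS = IN (1).
    q.extend_from_slice(&[0x00, 0x01, 0x00, 0x01]);
    q
}

fn main() {
    let query = build_query("www.mozilla.org");
    // 12-byte header + 17-byte question name + 4 bytes of type/class = 33.
    assert_eq!(query.len(), 33);
    println!("query is {} bytes", query.len());
}
```

Because the query travels inside ordinary HTTPS, on-path observers see only an encrypted connection to the resolver, which is the privacy property DoH provides.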
Firefox’s deployment of DoH is also designed to respect ISP-offered parental control services where users have opted into them, and it offers techniques for operating with enterprise deployment policies.

The comment period

As we explore bringing the benefits of DoH to more users, we’re launching a comment period in parallel to crowdsource ideas, recommendations, and insights that can help us maximise the security- and privacy-enhancing benefits of our implementation of DoH in new regions. We welcome contributions from anyone who cares about the growth of a healthy, rights-protective and secure Internet.

Engaging with the Mozilla DoH implementation comment period

- Length: The global public comment period will last for a total of 45 days, starting on November 19, 2020 and ending on January 4, 2021.
- Audience: The consultation is open to all relevant stakeholders interested in a more secure, open and healthier Internet across the globe.
- Questions for consultation: A detailed set of questions which serve as a framework for the consultation is available here. It is not mandatory to respond to all questions.
- Submitting comments: All responses can be submitted in plaintext or as an accessible PDF to [email protected]. Unless the authors explicitly opt out in the email in which they submit their responses, all genuine responses will be made available publicly on our blog. Submissions that violate our Community Participation Guidelines will not be published.

Our goal is that DoH becomes as ubiquitous for DNS as HTTPS is for web traffic, supported by ISPs, MNOs, and enterprises worldwide to help protect both end users and DNS providers themselves. We hope this public comment period will take us closer to that goal, and we look forward to hearing from stakeholders around the world in creating a healthier Internet.
The post Mozilla DNS over HTTPS (DoH) and Trusted Recursive Resolver (TRR) Comment Period: Help us enhance security and privacy online appeared first on Open Policy & Advocacy.