Posted over 7 years ago by Denelle Dixon
We believe that the young people who would benefit from the Deferred Action for Childhood Arrivals (DACA) program deserve the opportunity to take their full and rightful place in the U.S. The possible changes to DACA that were recently reported would remove all benefits and force people out of the U.S. – that is simply unacceptable.
Removing DREAMers from classrooms, universities, internships and workforces threatens to put the very innovation that fuels our technology sector at risk. Just as we said with previous Executive Orders on Immigration, the freedom for ideas and innovation to flow across borders is something we strongly believe in as a tech company. More importantly, it is something we know is necessary to fulfill our mission to protect and advance the internet as a global public resource that is open and accessible to all.
We can’t allow talent to be pushed out or forced into hiding. We also shouldn’t stand by and allow families to be torn apart. More importantly, as employers, industry leaders and Americans — we have a moral obligation to protect these children from ill-willed policies and practices. Our future depends on it.
We want DREAMers to continue contributing to this country’s future and we do not want people to live in fear. We urge the Administration to keep the DACA program intact. At the same time, we urge leaders in government to enact a bipartisan permanent solution, one that will allow these bright minds to prosper in the country we know and love.
The post Statement on U.S. DACA Program appeared first on The Mozilla Blog.
|
Posted over 7 years ago by [email protected] (Rabimba Karanjai)
The Open Networking Summit took place on April 3-6, where Enterprise, Cloud and Service Providers gathered in Santa Clara, California to share insights, highlight innovation and discuss the future of open source networking. I was invited to give a talk there about Web Virtual Reality and A-Frame.
So, Open Networking Summit (ONS) actually consists of two events – there might be more, but I was involved with two. ONS is the big event itself. There is also the Symposium on SDN Research (SOSR). This is an academic conference that accepts papers.
There were some pretty fantastic papers at the conference. My favorite was a system called “NEAt: Network Error Auto-Correct”. The idea is that the system keeps track of what’s going on in your network, notices problems, and automatically corrects them. It was designed for an SDN setup where you have a controller that is responding to changes in the network and telling systems what to do.
The event was held at the San Jose Convention Center and was pretty packed.
Keynotes were sprawled across the first floor, with a very big auditorium that encompassed the whole of the first floor. The individual talks were assigned different rooms on the two floors. Poster sessions were held on the second floor near another hall where the talks accompanying the posters were going on. The talks were not recorded. I had roughly 35 people in my talk, and that was a pretty perfect audience size to have without being overwhelmed. A previous version of my talk is available here. It would be great to have some feedback on it, though the content has changed quite a bit since then.
Frankly, I received quite a lot of interest in the talk and questions about it. The questions mostly involved authoring tools for WebVR and how we can create scenes that can interact with industrial hardware – something that urged me to work on some pet projects I will write about later. What do you think about how networking and industry can merge with WebVR and VR in general? Let me know in the comments or on Twitter. I will soon be posting my take on it with a few live examples and demos.
|
Posted over 7 years ago by allen
Dave Winer recently blogged about his initial thoughts after dipping his toes into using some modern JavaScript features. He ends by suggesting that I might have some explanations and stories about the features he is using. I’ve given talks that cover some of this and normally I might just respond via some terse tweets. But Dave believes that blog posts should be responded to by blog posts, so I’m taking a try at blogging back to him.
What To Call It?
The JavaScript language is defined by a specification maintained by the Ecma International standards organization. Because of trademark issues, dating back to 1996, the specification could not use the name JavaScript. So they coined the name ECMAScript instead. Contrary to some myths, ECMAScript and JavaScript are not different languages. “ECMAScript” is simply the name used within the specification where it would really like to say “JavaScript”.
Standards organizations like to identify documents using numbers. The ECMAScript specification’s number is ECMA-262. Each time an update to the specification is approved as “the standard” a new edition of ECMA-262 is released. Editions are sequentially numbered. Dave said “ES6 is the newest version of JavaScript”. So, what is “ES6”? ES6 is colloquial shorthand for “ECMA-262, Edition 6”. ES6 was published as a standard in 2015. The actual title of the ES6 specification is ECMAScript 2015 Language Specification and the preferred shorthand name is ECMAScript 2015 or just ES2015.
So, why the year-based designation? The 6th edition of ECMA-262 took a long time to develop, arguably 15 years. As ES6 was approaching publication, TC39 (the Technical Committee within Ecma International that develops the ECMAScript specifications) already knew that it wanted to change its process in a way that enabled yearly maintenance updates. That meant a new edition of ECMA-262 every year with a new edition number. After a few years we would be talking about ES6, ES7, ES8, ES9, ES10, ES11, etc. Those numbers quickly lose any context for people who aren’t deeply involved in the standards development process. Who would know whether the current standard is ES7, or ES8, or ES9? Was some feature introduced in ES6 or ES7? TC39 couldn’t eliminate the actual edition numbers (standards organizations love their document numbers) but it could change the document title. We decided that TC39 would incorporate the year of release into the document’s title and encourage people to use the year when referring to a specific edition. So, the “newest version of JavaScript” is ECMA-262, Edition 8 and its title is ECMAScript 2017 Language Specification. Some people still refer to it as ES8, but the preferred shorthand name is ECMAScript 2017 or just ES2017.
But saying “ECMAScript” or mentioning specific ECMAScript editions is confusing to many people and probably is unnecessary for most situations. The common name of the language really is JavaScript and unless you are talking about the actual specification document you probably don’t need to utter “ECMAScript”. But you may need to distinguish between old versions of JavaScript and what is implemented by newer, modern implementations. The big change in the language and its specification occurred with ES2015. The subsequent editions make relatively small incremental extensions and corrections to what was standardized in 2015. So, here is my recommendation. Generally you should just say “JavaScript” meaning the language as it is used in browsers, Node.js, and other environments. If you need to specifically talk about JavaScript implementations that are based upon ECMAScript specifications published prior to ES2015 say “legacy JavaScript”. If you need to specifically talk about JavaScript that includes ES2015 (or later) features say “modern JavaScript”.
Can You Use It Yet?
Except for modules, almost all of ES2015-ES2017 is implemented in the current versions of all the major evergreen browsers (Chrome, Firefox, Safari, Edge), and also in current versions of Node.js. If you need to write code that will run on non-evergreen browsers such as IE you can use Babel to pre-compile modern JavaScript code into legacy JavaScript code.
Module support exists in all of the evergreen browsers, but some of them still require setting a flag to use it. Native ECMAScript module support will hopefully ship in Node.js in spring 2018. In the meantime @std/esm enables use of ECMAScript modules in current Node releases.
Block Scoped Declaration (let and const)
The main motivation for block scoped declarations was to eliminate the “closure in loop” bug hazard that many JavaScript programmers have encountered when they set event handlers within a loop. The problem is that var declarations look like they should be local to the loop body but in fact are hoisted to the top of the current function, and hence each event handler defined in the loop uses the last value assigned to such variables.
Replacing var with let gives each iteration of the loop a distinct variable binding. So each event handler captures different variables with the values that were current when the event handler was installed:
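(The example below is an illustrative sketch rather than the original post's code; the buttons and handlers are made up.)

var buttons = document.querySelectorAll('button');

// Bug: var creates a single function-scoped i, so by the time any handler
// runs, the loop has finished and every handler logs buttons.length.
for (var i = 0; i < buttons.length; i++) {
    buttons[i].addEventListener('click', function () {
        console.log('clicked button ' + i);
    });
}

// Fix: let creates a fresh binding of j for each iteration, so each handler
// captures the value that was current when it was installed.
for (let j = 0; j < buttons.length; j++) {
    buttons[j].addEventListener('click', function () {
        console.log('clicked button ' + j);
    });
}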
The hardest part about adding block scoped declaration to ECMAScript was coming up with a rational set of rules for how the declaration should interact with the already existing var declaration form. We could not change the semantics of var without breaking backwards compatibility, which is something we try to never do. But, we didn’t want to introduce new WTF surprises in programs that use both var and let. Here are the basic rules we eventually arrived at:
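To give a flavor of those rules, here is an illustrative sketch (not the full list from the original post):

// Rule: in the same scope, a name may not be declared with both var and let.
//   let x = 1;
//   var x = 2;   // SyntaxError: Identifier 'x' has already been declared
//
// Rule: a block-scoped let may shadow an outer var without affecting it.
function g() {
    var y = 1;
    {
        let y = 2;
        console.log(y);  // 2 - the inner, block-scoped y
    }
    console.log(y);      // 1 - the outer, function-scoped y
}
g();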
Most browsers, except for IE, had implemented const declarations (but without block scoping) starting in the early 2000s. Firefox implemented block scoped let declarations (but not exactly the same semantics as ES2015) in 2006. By the time TC39 started serious work on what ultimately became ES2015, the keywords const and let had become ingrained in our minds such that we didn’t really consider any other alternatives. I regret that. In retrospect, I think we should have used let in place of const for declaring immutable variable bindings because that is the most common use case. In fact, I’m pretty sure that many developers use let instead of const for variables they don’t intend to change, simply because let has fewer characters to type. If we had used let in place of const then perhaps var would have been adequate for the relatively rare cases where a mutable variable binding is needed. A language with only let and var would have been simpler than what we ended up with using const, let, and var.
Arrow Functions
One of the primary motivations for arrow functions was to eliminate another JavaScript bug hazard: the “wrong this” problem that occurs when you capture a function expression (for example, as an event handler) but forget that this used inside the function expression will not be the same value as this in the context where you created the function expression. Conciseness was a consideration in the design of arrow functions, but fixing the “wrong this” problem was the real driver.
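(A minimal sketch of the hazard; the counter object and button are made up for illustration.)

var counter = {
    count: 0,
    attach: function (button) {
        // Bug: inside a plain function expression, this is not counter,
        // so this.count is not the property we meant to update.
        button.addEventListener('click', function () {
            this.count++;
        });

        // Fix: an arrow function has no this of its own; it uses the this
        // of the enclosing attach() call, which is the counter object.
        button.addEventListener('click', () => {
            this.count++;
        });
    }
};
counter.attach(document.querySelector('button'));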
I’ve heard several JS programmers comment that at first they didn’t like arrow functions but that they grew on them over time. Your mileage may vary. Here are a couple of good articles that address arrow function reluctance.
Modules
Actually, ES modules weren’t inspired by Node modules. But a lot of work went into making them feel familiar to people who were used to Node modules. In fact, ES modules are semantically more similar to the Pascal modules that Dave remembers than they are to Node modules. The big difference is that in the ES design (and Pascal modules) the interfaces between modules are statically defined, while in the Node modules design module interfaces are dynamically defined. With static module interfaces the inter-dependencies between a set of modules are precisely defined by the source code prior to executing any code. With dynamic modules, the module interfaces cannot be fully understood without actually executing the code of the modules. Or stated another way, ES module interfaces are declaratively defined while Node module interfaces are imperatively defined. Static module systems better support the creation of ahead-of-time tools such as accurate module dependency linters or module linkers. Such tools for dynamic module interfaces usually depend upon applying heuristics that analyze modules as if they had static interfaces. Such analysis can be wrong if the actual dynamic interface construction does things that the heuristics didn’t account for.
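A small sketch of the difference (the file and function names are hypothetical, and the two snippets would live in separate files):

// ES module: the interface is static - import and export are declarations,
// so tools can work out the dependency graph without running any code.
import { parse } from './parser.js';
export function compile(src) { return parse(src); }

// Node (CommonJS) module: the interface is built imperatively at runtime,
// so what ends up exported can depend on code that has to execute first.
const parser = require('./parser.js');
if (process.env.FAST) {
    module.exports.compile = (src) => parser.parseFast(src);
} else {
    module.exports.compile = (src) => parser.parse(src);
}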
The work on the ES module design actually started before the first release of Node. There were early proposals for dynamic module interfaces that are more like what Node adopted. But TC39 made an early decision that declarative static module interfaces were a better design, for the long term. There has been much controversy about this decision. Unfortunately, it has created issues for Node which have been difficult for them to resolve. If TC39 had anticipated the rapid adoption of Node and the long time it would take to finish “ES6” we might have taken the dynamic module interface path. I’m glad we didn’t and I think it is becoming clear that we made the right choice.
Promises
Strictly speaking, the legacy JavaScript language didn’t do async at all. It was host environments such as browsers and Node that defined the APIs that introduced async programming into JavaScript.
ES2015 needed to include promises because they were being rapidly adopted by the developer community (including by new browser APIs) and we wanted to avoid the problem of competing incompatible promise libraries or of a browser defined promise API that didn’t take other host environments into consideration.
The real benefit of ES2015 promises is that they provided a foundation for better async abstractions that do bury more of the BS within the runtime. Async functions, introduced in ES2017, are the “better way” to do async. In the pipeline for the near future is Async Iteration, which further simplifies a common async use case.
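(A small illustrative sketch of the difference; the URLs and fields are made up.)

// With promises (ES2015), the asynchronous plumbing is explicit:
function loadAvatar(id) {
    return fetch('/users/' + id)
        .then(function (response) { return response.json(); })
        .then(function (user) { return fetch(user.avatarUrl); })
        .then(function (response) { return response.blob(); });
}

// With an async function (ES2017), the same steps read like synchronous
// code while the runtime buries the promise machinery:
async function loadAvatarAsync(id) {
    const response = await fetch('/users/' + id);
    const user = await response.json();
    const avatar = await fetch(user.avatarUrl);
    return avatar.blob();
}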
|
Posted over 7 years ago by Air Mozilla
Intern Presentations - 6 presenters. Time: 1:00PM - 2:30PM (PDT); each presenter will start every 15 minutes. 6 SF
|
Posted over 7 years ago by Fernando Serrano
A brief introduction
When creating WebVR experiences developers usually face a common problem: it’s hard to find assets other than just basic primitives. There are several 3D packages to generate custom objects and scenes that use custom file formats, and although they give you the option to export to a common file format like Collada or OBJ, each exporter saves the information in a slightly different way. Because of these differences, when we try to import these files into the 3D engine that we are using, we often find that the result we see on the screen is quite different from what we created initially.
The Khronos Group created the glTF 3D file format to have an open, application-agnostic and well-defined structure that can be imported and exported in a consistent way. The resulting file is smaller than most of the available alternatives, and it’s also optimized for real-time applications: it’s fast to read since we don’t need to consolidate the data. Once we’ve read the buffers we can push them directly to the GPU.
The main features that glTF provides, and a 3D file format comparison, can be found in this article by Juan Linietsky.
A few months ago feiss wrote an introduction to the glTF workflow he used to create the assets for our A-Saturday-Night demo.
Many things have improved since then. The glTF blender exporter is already stable and has glTF 2.0 support. The same goes for three.js and A-Frame: both have a much better support for 2.0.
Now, most of the pain he experienced converting from Blender to Collada and then to glTF is gone, and we can export directly to glTF from Blender.
glTF is here to stay and its support has grown widely in recent months; it is available in most of the 3D web engines and applications out there, like three.js, babylonjs, cesium, sketchfab, blocks...
The following video from the first glTF BOF (held at Siggraph this year) illustrates how the community has embraced the format:
glTF Exporter on the web
One of the most requested features for A-Painter has been the ability to export to some standard format so people could reuse the drawing as an asset or placeholder in 3D content creation software (3ds Max, Maya,...) or engines like Unity or Unreal.
I started playing with the idea of exporting to OBJ, but a lot of changes were required to the original three.js exporter because of the lack of full triangle_strip support, so I put it on standby.
#A-painter triangleStrip lines exporter to OBJ, #wip :) /cc @utopiah @feiss #aframevr pic.twitter.com/skxbcJtoXy— Fernando Serrano (@fernandojsg) January 16, 2017
After seeing all the industry support and adoption of glTF at Siggraph 2017 I decided to give it a second try.
The work was much easier than expected thanks to the nice THREE.js / A-Frame loaders that Don McCurdy and Takahiro have been driving. I thought it would be great to export content created directly on the web to glTF, and it would serve as a great excuse to go deep on the spec and understand it better.
glTF Exporter in three.js
Thanks to the great glTF spec documentation and examples, I got a glTF exporter working pretty fast.
The first version of the exporter has already landed in r87, but it is still at an early stage and under development. There’s an open issue if you want to get involved and follow the conversations about the missing features: https://github.com/mrdoob/three.js/issues/11951
API
The API follows the same structure as the existing exporters available in three.js:
Create an instance of THREE.GLTFExporter.
Call parse with the objects or scene that you want to export.
Get the result in a callback and use it as you want.
var gltfExporter = new THREE.GLTFExporter();
gltfExporter.parse( input, function( result ) {
    var output = JSON.stringify( result, null, 2 );
    console.log( output );
    downloadJSON( output, 'scene.gltf' );
}, options );
More detailed and updated information about the API can be found in the three.js docs.
Together with the exporter I created a simple example in three.js that combines the different types of primitives, helpers, rendering modes and materials, and exposes all the options the exporter has, so we could use it as a testing scene throughout development.
Integration in three.js editor
The integration with the three.js editor was pretty straightforward and I think it’s one of the most useful features: since the editor supports importing plenty of 3D formats, it can be used as an advanced converter from these formats to glTF, allowing the user to delete unneeded data, tweak parameters, modify materials, etc. before exporting.
glTF Exporter on A-Frame
Please note that since three.js r87 is required to use the GLTFExporter, currently only the master branch of A-Frame is supported; the first compatible stable version will be 0.7.0, to be released later this month.
Integration with A-Frame inspector
After the successful integration with three.js’ editor the next step was to integrate the same functionality into the A-Frame inspector.
I’ve added two options to export the content to glTF:
Clicking on the export icon on the scenegraph will export the whole scene to glTF
Clicking on the entity’s attributes panel will export the selected entity to glTF
Exporter component in A-Frame
Last but not least, I’ve created an A-Frame component so users could export scenes and entities programmatically.
The API is quite simple: just call the export function from the gltf-exporter system:
sceneEl.systems['gltf-exporter'].export(input, options);
The function accepts several different input values: none (export the whole scene), one entity, an array of entities, or a NodeList (e.g. the result from a querySelectorAll).
The options accepted are the same as the original three.js function.
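For example (a hypothetical usage; the class name is made up, and trs is assumed to be one of the exporter's options):

// Export every entity tagged with the class "exportable" from the scene.
var sceneEl = document.querySelector('a-scene');
var entities = sceneEl.querySelectorAll('.exportable');   // a NodeList works as input
sceneEl.systems['gltf-exporter'].export(entities, { trs: true });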
A-Painter exporter
The whole story wouldn’t be complete if the initial issue that got me into glTF wasn’t resolved :) After all the previous work described above it was trivial to add support for exporting to glTF in A-Painter.
Include the aframe-gltf-exporter-component script:
Attach the component to a-scene:
And finally register a shortcut (g) to save the current drawing to glTF:
if (event.keyCode === 71) {
    // Export to glTF (g)
    var drawing = document.querySelector('.a-drawing');
    self.sceneEl.systems['gltf-exporter'].export(drawing);
}
Extra: Exporter bookmarklet
While developing the exporter I found it very useful to create a bookmarklet that injects the exporter code into any A-Frame or three.js page. This way I could export the whole scene just by clicking on it.
If AFRAME is defined, it will export AFRAME.scenes[0], as that is the default scene loaded. If not, it will try to look for the global variable scene, which is the most commonly used in three.js examples.
It is not bulletproof, so you may need to make some changes if it doesn’t work on your app, probably by looking for something other than scene.
To use it you should create a new bookmark on your browser and paste the following code on the URL input box:
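The original bookmarklet isn't reproduced here, but a simplified sketch of the idea looks like this (it assumes THREE.GLTFExporter is already loaded on the page):

javascript:(function () {
    // Pick the scene: AFRAME.scenes[0] on A-Frame pages, otherwise the global "scene".
    var target = (typeof AFRAME !== 'undefined') ? AFRAME.scenes[0].object3D : scene;
    new THREE.GLTFExporter().parse(target, function (result) {
        var blob = new Blob([JSON.stringify(result, null, 2)], { type: 'application/json' });
        var link = document.createElement('a');
        link.href = URL.createObjectURL(blob);
        link.download = 'scene.gltf';
        link.click();
    });
})();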
What’s next?
At Mozilla we are committed to helping improve the glTF specification and its ecosystem.
glTF will keep evolving and many interesting features are being proposed in the roadmap discussion. If you have any suggestions, don't hesitate to comment there, since all proposals are being discussed and taken into account.
As I stated before, the glTF exporter is still at an early stage but it’s being actively developed, so please feel free to jump into the discussion and help prioritize new features.
Finally: wouldn't it be great to see more content creation tools on the web with glTF support, so you don't depend on a desktop application to generate your assets?
|
Posted over 7 years ago by David Bryant
Mozilla Developer Roadshow events are fun, informative sessions for people who build the Web. Over the past eight months we’ve held thirty-six events all over the world sharing the word about the latest in Mozilla and Firefox technologies. Now we’re heading to Asia with the goals of finding local experts and connecting the community. Some of our most successful moments have been when we were able to bring local event organizers together to forge lasting relationships. Our first Asia event is in Singapore at the PayPal headquarters on September 19. (Check here for a full list of the cities.)

I’m excited to be coming along and be part of some of those events and so wanted to know what to anticipate plus get a little perspective from someone immersed in the local developer community. To do that I chatted with Hui Jing Chen, a front-end engineer based in Singapore who speaks globally on CSS Grid.

Q: What would you like to have come out of the event in Singapore? Should we look forward to more opportunities for collaboration between Mozilla and developers in Singapore and Asia?

Hui Jing (HJ): I definitely want to have more collaboration between Mozilla and developers in this region (Southeast Asia). I am aware that a lot of the work on web technologies comes out of Europe or North America, and there are lots of factors at play here, including the fact that digital computing was kickstarted in those regions. But it is the WORLD wide web, and I think it is important that developers from other regions contribute to the development of the web as well. For example, WebRTC expert Dr. Alex Gouaillard runs CoSMo Software Consultancy out of Singapore, and they are key contributors to WebRTC’s development. Understandably, it will take time for our region to catch up, but I hope events like this encourage developers in the region to not only be users of web technologies, but shapers of them as well.

David (DB): And independent of where the technology might come from, clearly the use of the web on a day-to-day basis is as much if not more so driven by what people are doing in Asia and the information (or experiences) they need. We know from our steady stream of developer relations efforts and our Tech Speakers activities that the more engaged we are with developers in this region the richer the web will be and the better sense we’ll have of where the web needs to go. So yes, more opportunities for collaboration would be marvelous!

Q: Meetups have been great regional allies for our Developer Roadshows — what are the unique cultural aspects of the Singapore/Malaysia meetup communities?

HJ: My web development career has taken place completely in Singapore, so I can only speak about the Singapore meetup community, but I find that there is less “networking” at the meetups, in that you’ll see pockets of people chatting with each other, but a large number of people show up to listen to the talk then leave immediately after. Maybe this happens universally; I can’t say for sure that this is unique though.

DB: That’s something we’ve heard and seen elsewhere too. In part that’s why we like the smaller, more frequent, more community-oriented approach we’ve taken for our Developer Roadshows as opposed to more traditional conference-style events.
Our hope is that keeping it more intimate, hosting jointly with well-established local partners, and engaging with an existing local community will give people a more comfortable way of considering ongoing collaboration opportunities yet still have an informative core topic that brings them together in the first place.

Q: Tell me a little bit about some challenges working with and participating in the community.

HJ: I’m the co-organizer of Talk.CSS, which is Singapore’s CSS meetup, and in general the challenge is in finding new speakers. The community in Singapore is really great, so finding venues is never the problem; it’s usually getting people to speak that is much trickier. I sometimes joke that I’m amazed I still have friends left because I’ve almost strong-armed all of them to speak at my meetup at some point in time, and they’re all too polite to say no. This could be an Asian thing, but people here are a bit more reserved, and if they’ve done something cool, they’re less compelled to stand up in front of everyone and share what they did.

DB: Hmmm, perhaps that’s something we can help you with. (And I mean the finding speakers part, not the still having friends part. :-)

Q: Every region has its particular special interests and strengths. What are some things that the Singapore and possibly Malaysian community does exceptionally well?

HJ: Singapore has an exceptionally strong tech community (at least from what I’ve heard from my friends outside of Singapore). This can be attributed to the efforts of key people, who we will hopefully meet in Singapore, who are super active when it comes to organizing events, helping out newbies, encouraging developers to start their own meetups, and generally just making the tech community in Singapore really vibrant. For example, webuild.sg is the go-to resource for all the tech meetups in Singapore, which is especially helpful if you want to start your own. They also have their own podcast, where they interview developers on their respective areas of expertise. Engineers.sg was originally a one-man operation which records almost every tech meetup in Singapore, and has now expanded into an entirely volunteer-run team.

DB: I wasn’t familiar with webuild.sg, but now that you’ve pointed it out to me I keep finding valuable and informative information on the site, for example on organizing events and contributing to open source. So it’s not only a vital resource for the community in Singapore but valuable elsewhere too.

Q: What expectations should we have as a team visiting from the US/Europe?

HJ: Locals are generally more reserved, in that usually the people who ask questions or speak up more are foreigners from Western countries. There is a sizeable population of developers from all over the world here in Singapore, so meetup attendance is very diverse. It seems that most people are more comfortable approaching speakers individually after the talk rather than during an open Q&A session.

DB: Individual conversations afterward are something I know our presenters and Roadshow team like very much too. I think our format for the Developer Roadshow works well for that, so I am looking forward to meeting people and talking to them one-on-one.

Q: Diversity and inclusion are very much highlighted in our tech communities — is this an issue of discussion here in Singapore?

HJ: These issues are not as hotly discussed here as in America, I think, largely because Singapore has always been a multicultural society. I’m not saying racism and misogyny do not exist here, but I dare say very few people are overtly so. I think the gender ratio in tech is male-dominated all over the world, including here.

DB: Certainly this is an issue that varies by region, though we’re committed to expressing our support for diversity and inclusion across all developer communities. That means, for example, having a clear code of conduct for events to promote the largest number of participants with the most varied backgrounds. And we love having these Developer Roadshow events play a part in that, having heard attendees express their delight when they meet other folks from similar backgrounds or come to hear presenters with diverse backgrounds. I know from talking to other people about their companies’ developer outreach efforts that we’re going to see even more progress in this space going forward.

Our Developer Roadshow events have been enjoyable and very popular, and I’m looking forward to the upcoming sessions in Asia. We’ll have more later in the year in other locations around the world too, and by the time 2017 is over we will have held about fifty-five sessions — more than one a week. Hopefully one has been near enough to you for you to take part and, as we’re keen to keep the program going, will be again soon. Let us know if not, though, and we’ll see what we can do!

Mozilla Developer Roadshow: Asia Chapter was originally published in Mozilla Tech on Medium, where people are continuing the conversation by highlighting and responding to this story.
|
Posted over 7 years ago by Air Mozilla
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
|
Posted over 7 years ago by Matěj Cepl
(This was posted as a comment on an episode of EconTalk.)
It seems to me that however awesome this interview was (and it was), it is still in danger of being the same kind of thing as the prediction about colourful faxes.
I think we are standing on the edge of the end …
|
Posted over 7 years ago by Mozilla
Today Mozilla is announcing the launch of “Global Mission Partners: India”, an award program specifically focused on supporting open source and free software. The new initiative builds on the existing “Mission Partners” program. Applicants based in India can apply for funding to support any open source/free software projects which significantly further Mozilla’s mission.
Our mission, as embodied in our Manifesto, is to ensure the Internet is a global public resource, open and accessible to all; an Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.
We know that many other software projects around the world, and particularly in India, share the goals of a free and open Internet with us, and we want to use our resources to help and encourage others to work towards this end.
If you are based in India and you think your project qualifies, Mozilla encourages you to apply. You can find the complete guidelines about this exciting award program on Mozilla’s wiki page.
The minimum award for a single application to the “Global Mission Partners: India” initiative is ₹1,25,000, and the maximum is ₹50,00,000.
The deadline for applications for the initial batch of “Global Mission Partners: India” is the last day of September 2017, at midnight Indian Time. Organizations can apply beginning today, in English or Hindi.
You can find a version of this post in Hindi here.
The post A ₹1 Crore Fund to Support Open Source Projects in India appeared first on The Mozilla Blog.
|
Posted over 7 years ago
The Rust team is happy to announce the latest version of Rust, 1.20.0. Rust
is a systems programming language focused on safety, speed, and concurrency.
If you have a previous version of Rust installed, getting Rust 1.20 is as easy as:
$ rustup update stable
If you don’t have it already, you can get rustup from the
appropriate page on our website, and check out the detailed release notes for
1.20.0 on GitHub.
What’s in 1.20.0 stable
In previous Rust versions, you could already define traits, structs, and enums
that have “associated functions”:
struct Struct;

impl Struct {
    fn foo() {
        println!("foo is an associated function of Struct");
    }
}

fn main() {
    Struct::foo();
}
These are called “associated functions” because they are functions that are
associated with the type, that is, they’re attached to the type itself, and
not any particular instance.
Rust 1.20 adds the ability to define “associated constants” as well:
struct Struct;

impl Struct {
    const ID: u32 = 0;
}

fn main() {
    println!("the ID of Struct is: {}", Struct::ID);
}
That is, the constant ID is associated with Struct. Like functions,
associated constants work with traits and enums as well.
Traits have an extra ability with associated constants that gives them some
extra power. With a trait, you can use an associated constant in the same way
you’d use an associated type: by declaring it, but not giving it a value. The
implementor of the trait then declares its value upon implementation:
trait Trait {
    const ID: u32;
}

struct Struct;

impl Trait for Struct {
    const ID: u32 = 5;
}

fn main() {
    println!("{}", Struct::ID);
}
Before this release, if you wanted to make a trait that represented floating
point numbers, you’d have to write this:
trait Float {
    fn nan() -> Self;
    fn infinity() -> Self;
    ...
}
This is slightly unwieldy, but more importantly, because they’re functions, they
cannot be used in constant expressions, even though they only return a constant.
Because of this, a design for Float would also have to include constants as well:
mod f32 {
    const NAN: f32 = 0.0f32 / 0.0f32;
    const INFINITY: f32 = 1.0f32 / 0.0f32;

    impl Float for f32 {
        fn nan() -> Self {
            f32::NAN
        }

        fn infinity() -> Self {
            f32::INFINITY
        }
    }
}
Associated constants let you do this in a much cleaner way. This trait definition:
trait Float {
    const NAN: Self;
    const INFINITY: Self;
    ...
}
Leads to this implementation:
mod f32 {
    impl Float for f32 {
        const NAN: f32 = 0.0f32 / 0.0f32;
        const INFINITY: f32 = 1.0f32 / 0.0f32;
    }
}
much cleaner, and more versatile.
Associated constants were proposed in RFC 195, almost exactly three years ago. It’s
been quite a while for this feature! That RFC contained all associated items, not just
constants, and so some of them, such as associated types, were implemented faster than
others. In general, we’ve been doing a lot of internal work for constant evaluation,
to increase Rust’s capabilities for compile-time metaprogramming. Expect more on this
front in the future.
We’ve also fixed a bug with the include! macro in documentation tests: for relative
paths, it erroneously was relative to the working directory, rather than to the current file.
See the detailed release notes for more.
Library stabilizations
There’s nothing super exciting in libraries this release, just a number of solid
improvements and continued stabilizing of APIs.
The unimplemented! macro now accepts
messages that let you say why
something is not yet implemented.
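For example (a hypothetical stub):

fn resize_cache(_new_size: usize) {
    // The message explains why this code path is still a stub.
    unimplemented!("waiting on the allocator rework");
}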
We upgraded to Unicode 10.0.0.
min and max on floating point types were rewritten in
Rust, no longer relying on
cmath.
We are shipping mitigations against Stack
Clash in this
release, notably, stack probes, and skipping the main thread’s manual
stack guard on Linux. You don’t need to do anything to get these protections
other than using Rust 1.20.
We’ve added a new trio of sorting functions to the standard library:
slice::sort_unstable_by_key, slice::sort_unstable_by, and
slice::sort_unstable. You’ll note that these all have “unstable” in the name.
Stability is a property of sorting algorithms that may or may not matter to you,
but now you have both options! Here’s a brief summary: imagine we had a list
of words like this:
rust
crate
package
cargo
Two of these words, cargo and crate, both start with the letter c. A stable
sort that sorts only on the first letter must produce this result:
crate
cargo
package
rust
That is, because crate came before cargo in the original list, it must also be
before it in the final list. An unstable sort could provide that result, but could
also give this answer too:
cargo
crate
package
rust
That is, the results may not be in the same original order.
As you might imagine, fewer constraints often mean faster results. If you don’t care
about stability, these sorts may be faster for you than the stable variants. As always,
it’s best to check both and see! These functions were added by RFC 1884, if you’d like
more details, including benchmarks.
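In code, the word-list example above could look like this (a small sketch, sorting by first letter only):

fn main() {
    let mut words = ["rust", "crate", "package", "cargo"];
    // An unstable sort by first letter: "crate" and "cargo" both key on 'c',
    // so their relative order in the result is not guaranteed.
    words.sort_unstable_by_key(|w| w.chars().next());
    println!("{:?}", words);
}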
Additionally, the following APIs were also stabilized:
CStr::into_c_string
CString::as_c_str and CString::into_boxed_c_str
Chain::get_mut, Chain::get_ref, and Chain::into_inner
Option::get_or_insert_with and Option::get_or_insert
OsStr::into_os_string
OsString::into_boxed_os_str
Take::get_mut and Take::get_ref
Utf8Error::error_len
char::EscapeDebug and char::escape_debug
compile_error!
f32::from_bits and f32::to_bits
f64::from_bits and f64::to_bits
mem::ManuallyDrop
str::from_boxed_utf8_unchecked
str::as_bytes_mut
str::from_utf8_mut and str::from_utf8_unchecked_mut
str::get_unchecked and str::get_unchecked_mut
str::get and str::get_mut
str::into_boxed_bytes
See the detailed release notes for more.
Cargo features
Cargo has some nice upgrades this release. First of all, your crates.io
authentication token used to be stored in ~/.cargo/config. As a configuration
file, this would often be stored with 644 permissions, that is, world-readable.
But it has a secret token in it. We’ve moved the token to ~/.cargo/credentials,
so that it can be permissioned 600, and hidden from other users on your system.
If you used secondary binaries in a Cargo package, you know that they’re kept
in src/bin. However, sometimes, you want multiple secondary binaries that
have significant logic; in that case, you’d have src/bin/client.rs and
src/bin/server.rs, and any submodules for either of them would go in the
same directory. This is confusing. Instead, we now conventionally support
src/bin/client/main.rs and src/bin/server/main.rs, so that you can keep
larger binaries more separate from one another.
See the detailed release notes for more.
Contributors to 1.20.0
Many people came together to create Rust 1.20. We couldn’t have done it without
all of you. Thanks!
|