Posted about 14 years ago
Hi all, yt is in need of an awesome new logo, which is why we are announcing the first ever new logo contest! So here's the deal: we'll accept entries for the next two weeks. Let's arbitrarily say Tuesday, May 10th, at 11:59:59 pm EST. If there is more than one entry, we will put it up for a community vote at that time. Oh, and the most important part -- the winner will get a coffee mug with their new logo front and center!!! Please email [email protected] with your image attached. Good luck, and happy logo-ing!
Sam
Posted about 14 years ago
We are proud to announce the release of yt version 2.1. This release includes several new features, bug fixes, and numerous improvements to the code base and documentation. The yt homepage, http://yt.enzotools.org/, hosts an installation script, a cookbook, documentation, and a guide to getting involved.
yt is an analysis and visualization toolkit for Adaptive Mesh Refinement data. yt provides full support for the Enzo, Orion, and FLASH codes, with preliminary support for the RAMSES, ART, Chombo, CASTRO, and MAESTRO codes. It can be used to create many common types of data products, such as:
* Slices
* Projections
* Profiles
* Arbitrary Data Selection
* Cosmological Analysis
* Halo Finding
* Parallel AMR Volume Rendering
* Gravitationally Bound Objects Analysis
There are a few major additions since yt-2.0 (Released January 17, 2011), including:
* Streamlines for visualization and querying
* A treecode implementation to calculate binding energy
* Healpix / all-sky parallel volume rendering
* A development bootstrap script for getting started with modifying and contributing code
* CASTRO particles
* Time series analysis
Documentation: http://yt.enzotools.org/doc/
Installation: http://yt.enzotools.org/doc/advanced/installing.html#installing-yt
Cookbook: http://yt.enzotools.org/doc/cookbook/recipes.html
Get Involved: http://yt.enzotools.org/doc/advanced/developing.html#contributing-code
If you can’t wait to get started, install with:
$ wget http://hg.enzotools.org/yt/raw/stable/doc/install_script.sh
$ bash install_script.sh
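Once installed, a minimal yt 2.x script to make a slice and a projection might look like the sketch below. This is an illustration rather than part of the release notes, and the dataset name is a placeholder for any supported AMR output.
from yt.mods import *

# "DD0010/moving7_0010" is a placeholder dataset name.
pf = load("DD0010/moving7_0010")
pc = PlotCollection(pf, center=[0.5, 0.5, 0.5])
pc.add_slice("Density", 0)       # slice of Density along the x-axis
pc.add_projection("Density", 1)  # projection of Density along the y-axis
pc.save("quickstart")            # writes quickstart_*.png files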
Development has been sponsored by NSF, DOE, and university funding. We invite you to get involved with developing and using yt!
Posted about 14 years ago
It's been nearly a month since the last yt development post; in that time, there's been quite a bit of development in a couple of different areas. This is culminating in a 2.1 release, for which Sam Skillman is release manager, sometime in the next few days.
Streamlines and Treecode
SamS has spent some time over the last month developing two types of streamline code. The first integrates a series of streamlines over a selection of the domain, which can then be visualized using the mplot3d package. The other mode involves selecting one of these integrated streamlines, which is then transformed into an AMR1DData object that can be queried for values and plotted in other ways. Sam has documented both of these modes of streamline integration, and they'll be included in a new build of the documentation shortly.
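As a rough sketch of how this looks in practice (adapted from memory of the yt 2.x cookbook, so treat the exact class and method names as assumptions, and the dataset name as a placeholder), one might integrate and plot a set of streamlines like this:
from yt.mods import *
from yt.visualization.api import Streamlines
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

pf = load("DD0010/moving7_0010")                  # placeholder dataset name
center = np.array([0.5, 0.5, 0.5])
seeds = center + (np.random.random((100, 3)) - 0.5)  # 100 random seed points

streamlines = Streamlines(pf, seeds, 'x-velocity', 'y-velocity', 'z-velocity',
                          length=1.0)
streamlines.integrate_through_volume()

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for stream in streamlines.streamlines:
    stream = stream[np.all(stream != 0.0, axis=1)]  # drop unused padding points
    ax.plot3D(stream[:, 0], stream[:, 1], stream[:, 2], alpha=0.1)
plt.savefig("streamlines.png")

# A single integrated streamline can also be pulled out as an AMR1DData object
# and queried for field values; the method name here is an assumption:
# line = streamlines.path(0); line["Density"]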
StephenS has been hard at work developing a treecode for speeding up the binding energy calculation of clumps or other regions. To that end, he's implemented not only the treecode itself (using some of the octree functionality that had so far gone unused in yt) but also a series of tests. He sees substantial speedups, and he has documented it so that it will be part of the next release.
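For reference, here is a sketch of how the treecode option is meant to be used; the keyword names (treecode, opening_angle) are taken from my reading of the yt 2.x docs and should be treated as assumptions rather than a definitive API, and the dataset name is a placeholder.
from yt.mods import *

pf = load("DD0010/moving7_0010")           # placeholder dataset name
sp = pf.h.sphere([0.5, 0.5, 0.5], 0.1)     # region to test for boundedness
# treecode=True switches on the tree-based approximation to the pairwise
# potential sum; opening_angle controls the accuracy/speed trade-off.
bound = sp.quantities["IsBound"](truncate=False, include_thermal_energy=True,
                                 treecode=True, opening_angle=2.0)
print("Gravitationally bound?", bound)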
Development Bootstrap and Pasteboards
The process of getting up and running with Mercurial and with development of yt can be a bit tricky. To try to alleviate that, I've added a new command, "yt bootstrap_dev", that will handle this. It will set up a BitBucket user, set up a couple of handy hg extensions, and also create a 'pasteboard.' The pasteboard itself is still a bit in flux, so it's not being touted just yet as a major feature, but I think it has some promise. The idea behind it is to create a semi-permanent mechanism for sharing scripts and so forth; it's a versioned hg repository that lives on BitBucket's servers, from which scripts can be programmatically downloaded, embedded, or viewed.
You can see mine at http://matthewturk.bitbucket.org. If you’ve used the bootstrap_dev command, you can play with the pasteboards with “yt pasteboard” and “yt pastegrab”, but be forewarned they might not be completely working yet!
Web GUI and PlotWindow
JeffO has been hard at work rethinking and rebuilding the plotting system in yt. He's started with the concept of the PlotCollection and thrown it out! The PlotCollection dates from a time when our mechanism for interacting with plots was fundamentally different; in the first few months of yt's existence, it was driven through a GUI called 'HippoDraw', which was organized around worksheets. The mapping was from a single worksheet to a single plot collection.
Nowadays, however, it seems that the most common use of a plot collection is to make a bunch of plots that aren't synced up in the way they were with HippoDraw. So Jeff has been rethinking the plotting system, sticking close to the idea of 'conduits' of data that are present in other systems like the AMRData system. You should be able to take a porthole into the data, toss it down, and then simply receive back an image that is visible through that porthole.
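To make the porthole idea a bit more concrete, here is a purely illustrative sketch (this is not yt's actual PlotWindow API): a porthole remembers where it is looking, and when handed point data it returns a fixed-resolution image of whatever falls inside its view.
import numpy as np

class Porthole:
    """Toy illustration of the porthole concept; not yt's PlotWindow."""
    def __init__(self, center, width, resolution=256):
        self.center = np.asarray(center, dtype=float)  # (x, y) center of the view
        self.width = float(width)                      # edge length of the view
        self.resolution = resolution

    def to_image(self, x, y, values):
        # Bin the (x, y, value) points that fall inside the view into an image.
        half = self.width / 2.0
        xedges = np.linspace(self.center[0] - half, self.center[0] + half,
                             self.resolution + 1)
        yedges = np.linspace(self.center[1] - half, self.center[1] + half,
                             self.resolution + 1)
        image, _, _ = np.histogram2d(x, y, bins=[xedges, yedges], weights=values)
        return image

# "Toss it down" somewhere else by changing center/width and ask again:
porthole = Porthole(center=(0.5, 0.5), width=0.25)
x, y, density = (np.random.random(1000) for _ in range(3))
image = porthole.to_image(x, y, density)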
Over the last couple of weeks he's made quite a bit of progress on that, which enabled us to add it as a widget in a forthcoming GUI for yt.
…and, speaking of the GUI, we now have a quite functional prototype of a GUI working. This is the fifth (!) GUI that has been designed for yt. BrittonS, CameronH, JeffO, and I took a few days and worked very hard on creating a useful, extensible, and maintainable GUI. (Britton, in fact, had one of the best quotes of the sprint. I said something like, "I'm looking forward to this GUI." Britton replied, "I'm looking forward to this being the last GUI." I couldn't agree with this sentiment more.)
All of the previous versions and implementations of GUIs for yt -- HippoDraw, then the original GUI called Reason and written in wxPython, then the subsequent TraitsUI wxPython Reason version 2.0, and finally the Tkinter-based Fisheye -- relied on a whole stack of dependencies. These included things like wx, GTK, Qt, and on and on and on. They were difficult to install in an automated fashion, but more than that, they added an incredible level of complexity to installing and using these GUIs at supercomputer centers.
So we decided to get rid of all of that and return to the basics. We built a Web GUI, where the widgets, the toolkits, events, and everything else are all handled by a web browser, using JavaScript. The decision to do this ultimately came down to maintainability -- there's no compilation necessary, everybody has a web browser, and it can be done trivially over SSH (with far less bandwidth than is required for a comparable forwarded X11 session).
The underlying system is just a Python interpreter; when buttons are pressed, they simply call functions that are available in the interpreter. It's fundamentally a mechanism for issuing commands and displaying their results (including images), and on top of this we have begun adding widgets.
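As a very rough sketch of that command-dispatch idea (written in present-day Python for brevity, and in no way the actual Reason implementation), the core of such a system is just an HTTP endpoint that executes posted code in a shared namespace and returns the captured output for the browser to render:
import io
import json
import contextlib
from http.server import BaseHTTPRequestHandler, HTTPServer

namespace = {}  # shared interpreter namespace; yt objects would live in here

class ExecHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        code = self.rfile.read(length).decode("utf-8")
        buffer = io.StringIO()
        try:
            with contextlib.redirect_stdout(buffer):
                exec(code, namespace)        # run the posted command
            result = {"output": buffer.getvalue()}
        except Exception as exc:
            result = {"output": "%s: %s" % (type(exc).__name__, exc)}
        payload = json.dumps(result).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Browser-side widgets would POST code here and render the JSON reply.
    HTTPServer(("localhost", 8080), ExecHandler).serve_forever()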
Cameron summarized some of this in an email to the developer list. It's not yet documented, but it is available for testing. This will be a big part of the 2.2 release of yt.
IRC
There's now an IRC channel for yt, on FreeNode. There aren't usually that many people in there (sometimes it's just me!), but the bot from CIA.vc will echo all pushed commits to the channel. You can use a client like Adium, Irssi, or one of the other Linux or OS X clients to connect to irc.freenode.net and then join the channel #yt ("/join #yt").
We'll occasionally have development talk there, but it can also be viewed as a faster-turnaround mechanism for getting help, chatting about oddities, and offering feedback. If you're interested in getting started developing on yt or fixing bugs, this would be the perfect way to get your feet wet. We'll also likely have some coordinated development sessions there in the future.
See you next week!
Posted over 14 years ago
This last week was the first full week on BitBucket, and so far I think it has been quite successful. The new development process is for most of the core developers to maintain personal forks for experimental or longer-term changes, and then to commit directly or merge when bug fixes or features are ready to be integrated. The list of forks is easily visible, and each individual fork's divergence from the primary repository can be viewed by clicking on the green arrows. All of the new mechanisms for developing using BitBucket are included in the "How to Develop yt" section of the documentation.
This last week I also spent a few days at KITP's Galaxy Clusters workshop, where I presented on yt. Talking to the simulators there, a few major points came out of my visit that are germane to the long-term development of yt.
As time goes on, yt should increasingly be viewed not just as a mechanism for analyzing data with its own internal analysis routines, but as a mechanism for handling data, transforming it into a uniform interface independent of the underlying simulation code. This will allow for linking against and utilizing external analysis codes much more easily. (The three examples that came up while I was at KITP were a new halo finder, a weak lensing code, and a radiation transport code.) To facilitate the process of calling external codes from within yt, I've written a section in the documentation that covers it.
There is a great deal of interest in ensuring yt works equally well with many different simulation platforms. This is a primary goal of my current fellowship, and I am working toward it. The next two codes that will be targeted for improvement are Gadget and ART, and I made good contacts at the workshop to this end.
The idea of analysis modules, particularly in a block-programming environment, is compelling. There is quite a bit of interest in an interface where inputs and outputs are handled like pipes. I am still formulating my ideas on this; last fall I experimented a bit with an introspection system that could handle arguments and hook up pipes, but it never got very far.
I had a number of scientific takeaways from the meeting, too, but those would all go into a different blog post. This week I hope to finish up the adaptive ray tracing. This past week StephenS unveiled a halo serialization mechanism, which I think many people are excited about, and SamS continued developing his streamline code.
Posted over 14 years ago
The major changes this week came mostly in the form of administrative shifts. However, SamS did some great work I'm going to hint at (he'll post a blog entry later), and I started laying the groundwork for something I've been excited about for a while: an MPI-aware task queue.
BitBucket
For the last couple of months, yt has been struggling under the constraints of
the hg server on its hosting plan. The issue was that particular files
checked into the repository (docs_html.zip for one, which is now gone, and
amr_utils.c, also gone, for another) took a while to transfer over some
connections. During this transfer, the (shared) hosting provider on
hg.enzotools.org would kill the server process, resulting in an "abort"
message being given to the cloning user.
Basically, this was kind of awful, because it meant people couldn't clone
the yt repo reliably, and it also meant that the install script would fail
in unpredictable ways (usually indicating a Forthon or setup.py error). I'm
kind of bummed out that I didn't do something about this sooner; I suspect
several people have tried to install yt and failed as a result of this. I
added some workarounds that staged the download of yt over a couple of
pulls, which usually fixed it, but there was no reliable solution.
Enter BitBucket. A few of the developers had been using BitBucket for
private projects, small repositories, and even (especially) papers that we’d
been working on. For a while we’d been talking about moving yt there and
trying to leverage the functionality it brings for Distributed Version
Control Systems — like forking and pull requests, social coding, and on and
on — and last week we hit the breaking point. So we created a new user
(yt_analysis) and uploaded the yt repo, the documentation repo, and the
cookbook, and we’re going to be conducting our development there. The old
addresses should all still work — we have forwarded
hg.enzotools.org to the new location.
One of the coolest aspects of this is that anyone can now "fork" the yt
repository. What this means is that you can get your own private version,
make changes to it very easily, and then submit them back upstream. I'm
really excited about this and I would encourage people to take advantage of
it. I've rewritten the Developer Documentation to describe how to do this.
All in all, I think this will be a very positive move. BitBucket has a
number of value adds, including the forking model, but we should also
immediately see a dramatic increase in the reliability of the repository.
Streamlines
SamS has done some work implementing streamlines. Right now they operate by
integrating any set of vector fields using RK4 and then plotting their paths
using Matplotlib's mplot3d toolkit. He's working on some cool ways to
colorize their values, and one of the things I am pushing for is to take any
given streamline and convert it to an AMR1DData object. This would enable
you to, for instance, follow a streamline through the magnetic field and
calculate the density at every point along that streamline.
Once Sam’s comfortable with the feature as-is, he’s going to blog about it,
so I won’t steal the thunder for his hard work here.
Task Queues
Building on the ideas behind the time series analysis, I started work on
the idea of a task queue that's MPI-aware. When this is finished, it will
act as a mechanism for dispatching work, fully integrated with time series
analysis. Right now it's not even close to being done, but a few pieces of
the architecture have been implemented.
The idea here is that you will be able to launch a parallel yt job, but have
it split itself into sub-parallel tasks. For instance, if you had 100
outputs of a medium-size simulation to analyze, you would write your time
series code as usual -- you would describe the actions you want taken, how
to output them, and so on. You would then launch yt with a "sub-parallel"
option, saying that you wanted to split the total number of MPI tasks into
jobs of size N -- for instance, you could launch a 64-processor yt job,
telling it to split into sub-groupings of 4 processors each. Each output
would then be distributed in a first-come, first-served fashion to the
processor groups. When a group finished its job, it would ask for the next
job available, and so on. When completed, the results would be collated and
returned.
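For a flavor of the communicator-splitting part of this, here is a minimal sketch (nothing like the eventual implementation, and it assigns outputs statically rather than first-come, first-served, which would require a coordinating rank). The output names are hypothetical.
from mpi4py import MPI

GROUP_SIZE = 4                                   # e.g. 64 ranks -> 16 groups of 4
outputs = ["DD%04d" % i for i in range(100)]     # hypothetical output names

world = MPI.COMM_WORLD
color = world.Get_rank() // GROUP_SIZE           # which sub-group this rank joins
subcomm = world.Split(color=color, key=world.Get_rank())
n_groups = max(1, world.Get_size() // GROUP_SIZE)

# Each group takes every n_groups-th output; the ranks inside a group would
# run the usual parallel yt analysis together on `subcomm`.
for output in outputs[color::n_groups]:
    if subcomm.Get_rank() == 0:
        print("group %d analyzing %s" % (color, output))
    # ... load `output` and run the per-output analysis here ...

world.Barrier()  # results from each group would be collated after this point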
I’m excited about this, but right now it’s in its infancy. I’ve constructed
the mechanisms to do this within a single process space, with no
sub-delegation of MPI tasks. The process of implementing this and properly
integrating it with time series analysis is going to be a long one, but I am
setting it as a task for the next major release of yt. If you’re at all
interested in this, drop me a line, and I’m happy to show you how to get
started testing it out.