Astrophysicists release IllustrisTNG, the most advanced universe model of its kind




 

Quote

 

Astrophysicists release IllustrisTNG, the most advanced universe model of its kind

 

Novel computational methods have helped create the most information-packed universe-scale simulation ever produced. The new tool provides fresh insights into how black holes influence the distribution of dark matter, how heavy elements are produced and distributed throughout the cosmos, and where magnetic fields originate.

 


 


 

The IllustrisTNG project
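
For readers wondering what a simulation like this actually computes, here is a deliberately tiny, gravity-only toy step in Python. It is purely illustrative: the real IllustrisTNG code also follows gas dynamics, magnetic fields, star formation, and black hole feedback, and evolves billions of resolution elements rather than a thousand random particles.

```python
# Toy gravity-only N-body step (illustrative only; not IllustrisTNG's actual code).
import numpy as np

G = 1.0           # gravitational constant in toy units
SOFTENING = 0.05  # softening length to avoid singular forces at tiny separations

def accelerations(pos, mass):
    """Pairwise softened gravitational acceleration on every particle."""
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # vector from i to j, shape (N, N, 3)
    dist2 = (diff ** 2).sum(axis=-1) + SOFTENING ** 2       # squared softened distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                           # no self-force
    return G * (diff * inv_d3[..., np.newaxis] * mass[np.newaxis, :, np.newaxis]).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """Advance positions and velocities by one kick-drift-kick step."""
    vel_half = vel + 0.5 * dt * accelerations(pos, mass)
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new

# 1,000 random particles; production cosmological runs evolve billions of elements.
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(1000, 3))
vel = np.zeros((1000, 3))
mass = np.full(1000, 1.0 / 1000)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
```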


I applaud their efforts, but the physics model is still incomplete. The missing mass problem is still unsolved, and there are still a lot of open questions regarding dimensionality that theorists are working on. Until then, these simulations aren't going to be accurate.

 

Seriously, I get that they want (and need) to keep working on these, and that's all well and good. I personally find them a bit misleading because I know better. /shrug


16 minutes ago, Unobscured Vision said:

I applaud their efforts, but the physics model is still incomplete. The missing mass problem is still unsolved, and there are still a lot of open questions regarding dimensionality that theorists are working on. Until then, these simulations aren't going to be accurate.

Of course it's not going to be completely accurate, but if it can help further our knowledge and understanding then it must be applauded.


Is anyone actually in a position to claim they have the most information-packed universe-scale simulation ever produced? :/


This topic is now closed to further replies.


  • Similar Content

    • By Usama Jawad96
      Microsoft releases first SimuLand dataset publicly to facilitate security research
      by Usama Jawad



      A month ago, Microsoft announced the open-source SimuLand initiative which allows security researchers to deploy lab environments, reproduce attack patterns and techniques, and then test whether tooling such as Microsoft 365 Defender, Azure Defender, and Azure Sentinel can detect adversarial patterns. Researchers can also capture telemetry from these experiments to extend their own research. Now, Microsoft has released a public dataset from the first simulation exercise.

      For those curious about how Microsoft generated this dataset, it is the result of collecting the telemetry from running the first simulation activity in the lab guides. The simulation in question shows how attackers can steal the Active Directory Federation Services (ADFS) token-signing certificate from an on-premises ADFS server and then use it to sign a new Security Assertion Markup Language (SAML) token that can be used to access mail data through the Microsoft Graph API.

      The dataset is a collection of security events that occurred during the simulation. Some of them can be seen below:

      Image via Microsoft

      The security logs have been collected through the Microsoft 365 Defender Advanced hunting API and the Azure Log Analytics workspace API. Microsoft says that sharing this dataset will allow researchers to better analyze adversarial scenarios, improve their detection rules, model the chain of events, automate simulation plans, and plan hackathons and challenges internally.

      Microsoft has also vowed to release more datasets and add new lab guides. You can find out more about the SimuLand initiative on GitHub here and check out the first SimuLand dataset on the GitHub-powered Security Datasets repository here.
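
      As a rough illustration of how such an exported dataset might be explored once downloaded, here is a short pandas sketch. The file name and column names below are assumptions made for the example, not the actual schema used in the Security Datasets repository.

      ```python
      # Rough illustration only: exploring an exported security-event dataset with pandas.
      # "simuland_export.json" and the column names are placeholders, not the repo's real schema.
      import pandas as pd

      # Assume the exported telemetry was downloaded locally as newline-delimited JSON.
      events = pd.read_json("simuland_export.json", lines=True)

      # First questions: what fields were captured, and which event types dominate?
      print(events.columns.tolist())
      if "EventID" in events.columns:  # column name is an assumption
          print(events["EventID"].value_counts().head(10))

      # Narrow the view to the end of the capture, where later attack steps would appear.
      if "TimeGenerated" in events.columns:  # column name is an assumption
          events["TimeGenerated"] = pd.to_datetime(events["TimeGenerated"])
          window = events.set_index("TimeGenerated").sort_index().last("30min")
          print(len(window), "events in the final 30 minutes of the capture")
      ```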

    • By Namerah S
      Doraemon Story of Seasons lands on PlayStation 4 on September 4 in the west
      by Namerah Saud Fatmi

      After seeing a PC release through Steam and crossing over to the Nintendo Switch platform in October 2019, Bandai Namco's farming simulation game, Doraemon Story of Seasons, is now making its way over to Sony's PlayStation 4 platform. As announced by the Japanese video game maker today, the title will be available in the U.S. and Europe on September 4.

      As for other regions, the newest Doraemon game will be launching in Southeast Asia and Japan on July 30 - a little over a month before the west. Bandai Namco Europe detailed the key features of the upcoming title:

      While the price for the game is yet to be announced, it can be assumed that it will be similar to the cost of the PC variant, set at $39.99 on Steam.

      A special video game trailer to announce the PlayStation 4 release of Doraemon Story of Seasons was also posted today, seen above.

    • By Ather Fawaz
      Nvidia's new algorithm⁠—6-DoF GraspNet⁠—helps robots pick up arbitrary objects
      by Ather Fawaz

      Image via Nvidia Research

      Nvidia Research has been making strides in using deep learning to train models for various tasks. Recently, the company clocked the fastest training times for BERT and trained the largest ever transformer-based model.

      However, as expected, algorithms based on deep learning require a large dataset to begin with, and that is a luxury in many situations. Alongside its continuing deep learning research, the company has therefore focused its efforts in another direction as well: at its Seattle Robotics Lab, it developed a novel algorithm, called 6-DoF GraspNet, that allows robots to grasp arbitrary objects.

      6-DoF GraspNet works as follows. The robotic arm observes the object and reasons about where to place the gripper in 6D space (the x, y, z position axes plus three rotational degrees of freedom). The algorithm first generates a set of candidate grasps with which the object could be picked up and then moved about as needed. That set of candidates is then run through a "grasp evaluator" that assigns a score to each one. Finally, the evaluator improves the success probability of the best grasp by tweaking it with small local transformations. The process, in outline:
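
      A minimal sketch of that generate-evaluate-refine loop is shown below. Every function here is a hypothetical stand-in written for illustration; none of it is Nvidia's published 6-DoF GraspNet code.

      ```python
      # Illustrative outline of a generate-evaluate-refine grasp pipeline.
      # All functions are hypothetical stand-ins, not Nvidia's 6-DoF GraspNet implementation.
      from dataclasses import dataclass
      import random

      @dataclass
      class Grasp:
          position: tuple       # (x, y, z) of the gripper in space
          rotation: tuple       # orientation, the other three degrees of freedom
          score: float = 0.0    # success score filled in by the evaluator

      def sample_grasps(point_cloud, n=64):
          """Stage 1: propose a diverse set of candidate 6-DoF grasps around the object."""
          return [Grasp(position=random.choice(point_cloud),
                        rotation=(0.0, 0.0, random.uniform(0.0, 6.28)))
                  for _ in range(n)]

      def evaluate(grasp, point_cloud):
          """Stage 2: score a candidate; a learned evaluator would predict success probability."""
          return random.random()  # placeholder score for the sketch

      def refine(grasp, point_cloud, steps=5):
          """Stage 3: nudge the best grasp with small local transformations to raise its score."""
          best = grasp
          for _ in range(steps):
              candidate = Grasp(position=best.position,
                                rotation=tuple(r + random.uniform(-0.05, 0.05) for r in best.rotation))
              candidate.score = evaluate(candidate, point_cloud)
              if candidate.score > best.score:
                  best = candidate
          return best

      def pick_grasp(point_cloud):
          """Full pipeline: generate candidates, score them, refine the winner."""
          candidates = sample_grasps(point_cloud)
          for g in candidates:
              g.score = evaluate(g, point_cloud)
          return refine(max(candidates, key=lambda g: g.score), point_cloud)
      ```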

      What is interesting here is that the researchers opted for "synthetic training data" over deep learning-based methods, which the company evaluated using Nvidia FleX, a particle-based simulation technique for generating visual effects in real time. The technique is detailed in the blog post, together with a GIF demonstrating the evolution of the grasps.



      According to the researchers at Nvidia, one of the biggest advantages of 6-DoF GraspNet is that it can be used to pick up arbitrary objects. The second is its modularity, which allows it to be used in a variety of computer vision applications and motion-planning algorithms. The third is that it can be paired with a model that assigns shapes to various objects based on their 'point cloud', which ensures that the robotic arm does not collide with any obstacles in its way, as they are represented by 'cubes' for the motion-planning module. The results, both for isolated and crowded objects, are evident in the video embedded above.

      Nvidia plans on showcasing the 6-DoF GraspNet in October/November at the International Conference on Computer Vision 2019 in Korea.

      For more details, you can study the research published here.

    • By Alexa Port
      Hello guys, please put your valuable suggestions about simulation games in the comment box.
      Help me find out which simulation game is best for playing on mobile.
    • By Abhay V
      Metamoto, a Silicon Valley startup, brings "Simulation as a Service" to autonomous vehicles
      by Abhay Venkatesh

      Metamoto, a Silicon Valley startup that recently raised $2 million in its Series-A funding, announced that it is collaborating with leading automotive players to launch an early engagement program. This program will help the company learn via an “accelerated feedback loop to enable iteration” before the launch of the company’s highly-anticipated ‘Simulation as a Service’ offering.

      In simpler terms, the program is aimed at gathering data and feedback from multiple parties playing in the autonomous driving arena, for training, testing, debugging and validating workflows of systems that are used in self-driving vehicles, but within the confines and safety of virtual computer simulation. The company adds that the participants that form the testing group were chosen to ensure that perspectives are represented from across the transportation industry including OEMs, Tier-1 suppliers, transportation network companies (TNCs) and stack, sensor and other technology providers.

      With the advent of cars that can partially drive themselves by leveraging a wide range of radars and stereo cameras fitted onboard, the next logical step would be to achieve complete autonomy. Developing such systems requires extensive research on the software front, coupled with exhaustive testing by putting these systems through their paces in real-life scenarios to gather substantial data and drive software refinements. However, with real-life testing come real-life hazards. Until these systems achieve perfection, there remain concerns regarding the safety of these tests. The death of a pedestrian in a fatal accident involving a self-driving Uber vehicle and the recent Tesla vehicle crash are examples of what could go wrong in testing, or when more faith is put in these systems than they can currently deliver.

      Project Road Runner, a Microsoft Garage project, aims to use simulation to mitigate the risks involved in real-world testing. It does so by conducting controlled computer simulation tests that can be scaled in order to simulate various possibilities that are otherwise time-consuming to cover in the real world. Metamoto is another such company that intends to offer software simulation solutions for autonomous vehicles. Using a virtual environment ensures that no fatalities occur and that the system can learn from repeated tests and mistakes at magnitudes that are impossible in the real world.

      Chad Partridge, CEO of Metamoto, added:

      The company suggests that the simulation products can scale and deliver precise simulation of a variety of sensors including LiDAR, camera, radar, GPS, IMU, and others. The simulations run a variety of tests, mirroring unique cases and learning from all captured data to “identify isolated outcomes, performance boundaries, and system tolerances”.

      How long it will be before a truly autonomous vehicle is mass-produced remains unknown, but the technology that will enable such a system seems to be catching up.

      Source: Metamoto