FEA Specialists, take note! How to get the most out of your Abaqus tokens

Dassault Systèmes is one of the world’s top enterprise software vendors by revenue, and the largest supplier of engineering design tools (EDT). Abaqus, which Dassault acquired in 2005, is one of its Finite Element Analysis (FEA) products, widely used for computer-aided engineering (CAE) simulations. There are two license manager options: Dassault’s own product, DSLS, or Flexera’s FlexNet, which is designed to handle token licenses. The token system works well for Abaqus because of the nature of FEA software, which is often run on multiple processors with little or no intervention by the user who scheduled the job. The number of tokens required for a job depends on:

  • the number of simultaneous users
  • the number of parallel processors (CPUs or GPU) that are needed
  • at least one token for the Abaqus/CAE pre-processor, depending on the number of users

So, while using multiple parallel processors will cut the time to run a simulation to a fraction of what it would take on a single CPU, there is an extra token cost for each processor used. Below is an illustration of token costing for various Abaqus products:
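As a rough sketch of how that costing typically scales, the formula commonly quoted for Abaqus analysis-token consumption is tokens = int(5 × N^0.422), where N is the number of cores. The short snippet below illustrates how the curve flattens as cores are added; the constant 5 and the exponent 0.422 are assumptions on our part and should be verified against your own SIMULIA price list and license agreement.

```python
# Rough sketch of the commonly quoted Abaqus analysis-token formula:
# tokens = int(5 * N ** 0.422). The constants are assumptions here;
# confirm them against your own SIMULIA price list / license agreement.

def abaqus_analysis_tokens(cores: int) -> int:
    """Approximate analysis tokens drawn by one job running on `cores` processors."""
    if cores < 1:
        raise ValueError("a job needs at least one core")
    return int(5 * cores ** 0.422)

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16, 32):
        print(f"{n:>3} cores -> {abaqus_analysis_tokens(n)} tokens")
```

Note how doubling the core count adds only a few tokens each time, which is why running wide parallel jobs is usually worthwhile despite the extra cost per processor.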

Keeping track of token usage can be quite a complex task, and it becomes even more complex when FEA software is run on multiple CPUs. Weighing the time taken to process a simulation against the cost of tokens can also be a challenge. This is why GPUs have become so popular for intensive computations like FEA. Tests run by the Abaqus team in tandem with Nvidia have found that Abaqus/Standard runs up to 3.7 times faster when using a GPU.

Source: https://www.nvidia.com/en-us/data-center/gpu-accelerated-applications/abaqus/

Not only is processing faster, it is also cheaper. The token calculation when using one GPU in addition to the CPUs is the same as for the CPUs without a GPU. In other words, looking at the graph above, both methods use 12 tokens, but results are produced in a quarter of the time. The acceleration improves for very complex calculations with very high numbers of degrees of freedom (DOF), and may not make a marked difference for smaller computations with fewer DOFs.
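As a worked example under the assumptions of the earlier sketch, an 8-core job draws int(5 × 8^0.422) = 12 tokens; if, as described here, the added GPU draws no extra tokens, the job still costs 12 tokens but finishes in roughly a quarter of the time. (GPU token rules have varied across Abaqus releases, so check your own agreement.)

```python
# Worked example for the 8-core case in the text (GPU assumed to draw no
# extra tokens, per the description above -- verify for your release).
tokens_cpu_only = int(5 * 8 ** 0.422)   # 12 tokens, baseline wall time
tokens_with_gpu = tokens_cpu_only       # still 12 tokens, ~1/4 the wall time
print(tokens_cpu_only, tokens_with_gpu)
```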

This token pricing model applies irrespective of the number of CPUs utilized, as the graph below illustrates.

Source: NVIDIA GPUs Accelerate Dassault Systèmes SIMULIA’s Abaqus/Standard FEA Solver  

https://www.nvidia.com/object/tesla-abaqus-accelerations.html

The improvement in processing time is shown in the graph below, taken from an Nvidia white paper based on a study at Rolls-Royce.

Source: White Paper – Accelerating Abaqus Computations Using NVIDIA GPUs

Possibly an even more important benefit is the energy saving that comes from using energy-efficient GPUs. The graph below, taken from the same white paper, shows the energy saved when a GPU is added to the computation.

Source: White Paper – Accelerating Abaqus Computations Using NVIDIA GPUs

It clearly makes good sense to start incorporating GPUs into complex calculations such as FEA, wherever the software caters for it.

There is further good news for Simulia customers: Dassault has introduced a new, extended token licensing scheme that bundles three additional products, namely Isight, Tosca and fe-safe.

Good News for OpenLM Customers

Customers who use Abaqus have been able to apply OpenLM in managing their Abaqus licenses and tokens for some time. We are pleased to announce that we have recently enhanced our product in answer to a request from a leading research company who use OpenLM to manage their licenses. They asked us to provide license management for the Nvidia GPUs; Nvidia provides a license manager, but for ease of use, the company wanted just one tool to manage the Nvidia licenses as well as their computational software. It must be noted that the Nvidia license manager does not report on usage, an essential requirement for control and optimization. We have been able to develop this solution and it is now available for any of our customers who apply GPUs in their IT environment. We recognise that GPUs are used for a variety of applications, from crypto mining to VDI (virtual desktop infrastructure) installations. Even more conventional engineering tools, such as AutoCAD, are being boosted by the use of GPUs. Now their licenses can be managed through OpenLM as well.
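For readers who want a feel for what GPU usage reporting involves, here is a minimal sketch that polls nvidia-smi for per-GPU utilization and memory use. It assumes only the standard nvidia-smi query flags and is our own illustration of the idea, not OpenLM’s implementation; the field names and record layout are ours.

```python
# Minimal sketch: sample per-GPU usage by shelling out to nvidia-smi
# (assumed to be on PATH). Illustration only, not OpenLM internals.
import csv
import subprocess
from datetime import datetime, timezone

def sample_gpu_usage() -> list[dict]:
    """Return one usage record per visible Nvidia GPU."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=index,name,utilization.gpu,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    samples = []
    for row in csv.reader(out.strip().splitlines()):
        index, name, util, mem = [field.strip() for field in row]
        samples.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "gpu_index": int(index),
            "gpu_name": name,
            "utilization_pct": int(util),
            "memory_used_mib": int(mem),
        })
    return samples

if __name__ == "__main__":
    for sample in sample_gpu_usage():
        print(sample)
```

Collecting samples like these at regular intervals is what makes usage reporting, and therefore optimization, possible on top of plain license enforcement.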


Good News for High-Performance Computer Users

The development of the virtual GPU (vGPU) by Nvidia has enabled many organizations to move their installations to a high-performance model, using GPUs instead of CPUs for processes and applications that require large computational power. It has also added another license manager to the toolbox of licensing applications these companies have to administer. Following a customer request, OpenLM has developed a solution for managing GPU licenses.

GPU license management is essential for compliance, especially in VDI environments

Graphics Processing Units (GPUs) are becoming a very popular alternative to CPU processing, especially for the heavy computational work required in engineering and science. Running simulations on a GPU can deliver a substantial speed-up: an Ansys Fluent user, for example, can run a computation anywhere from twice as fast to 3.7 times as fast, depending on the class of GPU used. The leading supplier of GPUs is Nvidia, with 49% of the market; what was originally designed as an aid to gaming and desktop graphics is now an indispensable aid to engineering applications such as CAE (computer-aided engineering). There is even a trend towards using GPUs for standard office productivity, like Windows 10, which requires 30% to 50% more graphics processing power, depending on whether one is working at the operational or application level.

Typical situations where high-performance computing is needed are:

  • architects, designers and engineers who use CAD, CAE and CAM software
  • “miners” of cryptocurrency, who utilize extensive processing power to solve their blockchain algorithms
  • researchers who use AI and machine learning for new discoveries in healthcare, automotive and robotic design and other disciplines
  • and even regular users of widely used software such as Windows, Office and Adobe applications, which require increased graphics capability with each new release

Many CIOs are also moving to a VDI (virtual desktop infrastructure) architecture. Instead of upgrading or replacing desktops and laptops on a regular basis to increase processing capability, upgrades are made to the VDI, which is where the processing occurs; the user simply accesses the application they want from their own device, and the VDI executes the processing and holds the data. This adds a new level of security: if a user’s phone, tablet or laptop is stolen, the thief cannot access anything of value to the company, because vital company information is centralised and secure on premises rather than left on a bus or in a taxi by accident. Using a VDI also saves on the capex budget, because less hardware has to be bought. However, the use of GPUs adds another set of software licenses that have to be managed.

There are two types of VDI setup, persistent and non-persistent, and they have much in common with conventional software license models:

  • A persistent VDI is a “desktop” in the cloud that is linked to a specific user, similar to a named-user software license.
  • A non-persistent VDI is a “floating” desktop. The user accesses the desktop, applies it to the task at hand and returns it to the “pool”, making it available to the next user. This is similar to a concurrent-user software license, which is not tied to any particular user.

While managing licenses for a “named” user is straightforward, as it works on a one-to-one relationship between user and VDI, the non-persistent VDI is more complex, because any user can access the VDI and release it for use by another user. Another licensing consideration relates to complex simulations and calculations where multiple parallel processors are used, such as Simulia’s Abaqus. In order to ensure license compliance, Nvidia provides a license manager application, but one of our customers requested a better solution.
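To make the analogy concrete, here is a toy sketch of the floating (concurrent) pool that a non-persistent VDI or concurrent-user license behaves like. The class, names and pool size are purely illustrative and are not drawn from any vendor’s license manager.

```python
# Toy model: a non-persistent VDI pool behaves like a concurrent
# (floating) license pool. Illustrative only.
class FloatingPool:
    def __init__(self, seats: int):
        self.seats = seats
        self.checked_out: set[str] = set()

    def checkout(self, user: str) -> bool:
        """Give the user a desktop/license if one is free."""
        if user in self.checked_out:
            return True
        if len(self.checked_out) >= self.seats:
            return False          # pool exhausted: user must wait
        self.checked_out.add(user)
        return True

    def checkin(self, user: str) -> None:
        """Return the desktop/license to the pool for the next user."""
        self.checked_out.discard(user)

pool = FloatingPool(seats=2)
print(pool.checkout("alice"))   # True
print(pool.checkout("bob"))     # True
print(pool.checkout("carol"))   # False - no free seat until someone checks in
pool.checkin("alice")
print(pool.checkout("carol"))   # True
```

Tracking who holds a seat at any moment, and for how long, is exactly the reporting problem that makes non-persistent setups harder to manage than named-user ones.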

The customer, a seasoned user of OpenLM software, had been using the product to monitor the specialized software they use to perform simulations and complex mathematical calculations. They conduct research on products and innovations for a wide range of industries and rely on GPUs and high-performance computing to do their work. The benefit of OpenLM for them was that they could bypass all the different license managers from the various vendors and use a single product for managing access to licenses and optimizing performance and productivity. They wanted the convenience of managing their Nvidia licenses without having to use yet another license manager tool, while ensuring that they were compliant with their license agreement at all times.

The OpenLM development team studied what was required and delivered the desired solution within a few weeks. As our customers are mainly in engineering, science and tech, most of them either already use GPUs or are in the process of making the switch. We are happy to announce that we can now help them monitor their GPU usage and compliance alongside the license administration of their spatial, mathematical and engineering software. While Nvidia licenses are relatively cheap compared to a product like Dassault’s Catia or even AutoCAD, companies that perform extensive calculations, or are involved in AI, can have thousands of GPU licenses, which puts manual management out of the question. Even a customer with only a small investment in GPUs can benefit, because they are using a common license manager for all the software products they need to administer.
