Hello
I started several simulations (flow/particles) but they took a lot of time (more than 600 min).
Is there a way to improve that?
thank you
Hi @mcusmariu,
Well, the most efficient way would be to increase the number of cores you use.
Can you share your project with us? That makes it easier to identify parameters that we could optimize in your simulation.
Edit: Increasing your cores does not always reduce simulation time.
Best,
Jousef
Hi jousefm,
The number of cores is already at the maximum of 32.
The name of my project is SD-LFP-03. Can you find it in the public projects?
thank you
Hi @mcusmariu,
I could not find it in your Dashboard. Here are instructions on How to share a project.
Best,
Jousef
shared it
Hey @mcusmariu!
I got input from our CFD expert about your problem. There are certain things you can do to make your simulation faster:
Use only 4 cores for your simulation. 32 is too many considering your mesh has only about 9000 elements; with more cores the overhead goes mainly into communication between processors (see the sketch below).
Increase the Courant number to, say, 0.9.
Change Write Interval from Runtime to Adjustable Runtime.
I hope these changes will help you to make your simulation faster. In case you run into memory problems, choose 8 cores instead.
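As a rough back-of-the-envelope illustration of why more cores can hurt such a small case (the constants below are made up purely for illustration, not SimScale's actual numbers): the compute part of each time step shrinks with the core count, but the communication between the subdomains grows with it.

```python
# Toy model: per-step runtime = compute (shrinks with cores) + communication (grows with cores).
# All constants are illustrative only.

def estimated_runtime(n_cells, n_cores,
                      cost_per_cell=1e-3,       # compute cost per cell per step (illustrative)
                      comm_cost_per_core=0.5):  # communication cost per core per step (illustrative)
    compute = n_cells * cost_per_cell / n_cores
    communication = comm_cost_per_core * n_cores
    return compute + communication

for cores in (1, 2, 4, 8, 16, 32):
    print(cores, round(estimated_runtime(9000, cores), 2))
# For ~9000 cells the communication term takes over quickly,
# so a handful of cores is the sweet spot rather than 32.
```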
Best,
Ahmed
Hi ahmedhussain18,
I will try the adjustments that you suggested.
Point 2 - Courant number - can you give me more details? I didn't find it.
thank you
Hi @mcusmariu!
Please see here: CFL condition
You will find it on the platform under Simulation Control
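In short, the Courant number relates the time step to the time a flow feature needs to cross one cell, Co = u · Δt / Δx. Here is a small Python sketch of what a limit of 0.9 means for the time step; the velocity and cell size are only example values:

```python
# Courant (CFL) number: how many cells the flow crosses per time step.
def courant_number(velocity, dt, cell_size):
    return velocity * dt / cell_size

# Largest time step that keeps the Courant number below co_max.
def max_stable_dt(velocity, cell_size, co_max=0.9):
    return co_max * cell_size / velocity

print(courant_number(velocity=2.0, dt=0.001, cell_size=0.01))  # -> 0.2
print(max_stable_dt(velocity=2.0, cell_size=0.01))             # -> 0.0045
```

With Adjustable Runtime the solver adapts the time step so the Courant number stays below the value you set.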
Best,
Ahmed
Hello Ahmed,
I increased it to 0.9 and I'll start a new simulation.
I'll keep you informed if there is an improvement.
thank you
Mario
Hello again,
We have an improvement: 106 min vs. 173 min, but that is still very, very long for a 7-second simulation.
Hey @mcusmariu!
I see that you used a high number of cores, and that is not recommended, as mentioned by @ahmedhussain18. I would like to comment on it, hoping this might help in future simulations.
Parallelization in general works either at the loop level or the domain level. In the default domain-level method, the model is broken into various topological domains, each with nearly the same amount of computational expense, and each domain is assigned to a processor. These domains are then analyzed independently, and information is passed between them through a thread- or MPI-based method.
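To make the domain-level idea a bit more concrete, here is a toy Python sketch (an illustration of the concept only, not how any real solver is implemented): the cells are split into nearly equal contiguous subdomains, each subdomain would be advanced by its own processor, and only the values at the subdomain boundaries (the halo or ghost cells) need to be exchanged each step.

```python
import numpy as np

def decompose(n_cells, n_domains):
    """Split cell indices 0..n_cells-1 into nearly equal contiguous chunks."""
    return np.array_split(np.arange(n_cells), n_domains)

domains = decompose(n_cells=9000, n_domains=4)
for rank, cells in enumerate(domains):
    # each "processor" would advance only its own cells, then exchange
    # the first/last values with its neighbours (the halo/ghost cells)
    print(f"rank {rank}: cells {cells[0]}..{cells[-1]} ({len(cells)} cells)")
```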
The loop-level method parallelizes low-level loops (like element, node, and contact pair operations) that are responsible for most of the computational cost. This speed-up may be significantly less than what can be achieved through domain-level parallelization, and will vary depending on the features of the analysis. Some analyses, like the general contact algorithm and kinematic constraints, do not make use of parallel loops at all. The results of this method are not dependent on the number of processors, but using a large number of processors (more than 4) may scale poorly.
In transient simulations (like the particle method), I have personally seen that unless the number of particles is more than 10,000, using a larger number of processors actually slows down the computation.
To speed up your simulation, especially if you are running it over a long time period, I suggest you try mass and time scaling. Both methods can be a bit tricky to handle, so apply them with care, but if done properly, they speed up the simulation substantially.
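To illustrate why mass scaling helps (and why it needs care): in an explicit transient solver the stable time step is roughly the element size divided by the material wave speed, so artificially increasing the density raises the step by the square root of the scaling factor, at the price of changed inertia. A small Python sketch with placeholder steel-like values:

```python
import math

def stable_dt(element_size, youngs_modulus, density):
    # explicit stability limit: dt <= L / c, with wave speed c = sqrt(E / rho)
    wave_speed = math.sqrt(youngs_modulus / density)
    return element_size / wave_speed

base = stable_dt(1e-3, 210e9, 7800)          # placeholder values
scaled = stable_dt(1e-3, 210e9, 7800 * 100)  # density scaled up by 100x
print(base, scaled, scaled / base)           # the scaled step is 10x larger (sqrt(100))
```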
Do not use a high number of particles - only as many as required. This may sound like very general advice, and I see that you are not making this mistake, but I have seen many users use an excessively fine mesh or a high number of particles, and hence lose a lot of time waiting for the simulation to run.
I hope this helped.
Regards,
Vikrant Srivastava