Q. What do you see for the future of structured grid only flow solvers? Will unstructured methods advance to the point at which older structured grid codes will be unnecessary or will structured methods always have their niche market? How long does CFX plan to support its structured grid only flow solvers?
For structured grid ONLY solvers, the future is bleak indeed! The inherent flexibility and performance advantages of unstructured, hybrid (i.e. multiple element type) meshes will ensure that. Most mainstream unstructured mesh CFD codes also support hexahedral elements, so they retain backward compatibility with existing multi-block structured meshes.
So the real question is: what role do structured grids have, and in what applications? First of all, there is a large established precedent of using structured meshes for several important applications, e.g. the main gas path of rotating machinery. Users have invested a great deal in validating results on these meshes and in creating processes that generate them. Given the reliance on these approaches and the confidence that has arisen in them, I expect that they will continue to be used for quite some time. Secondly, it is clear that for some geometries, e.g. long skinny pipes, highly anisotropic structured meshes distribute nodes much more efficiently than current unstructured mesh techniques do, because stretched structured meshes do not over-resolve the streamwise direction.
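To put a rough number on that efficiency argument, here is a back-of-the-envelope sketch. It assumes the unstructured mesh is essentially isotropic in the pipe core, and all dimensions, cell sizes, and the stretching ratio are illustrative choices, not values taken from any particular code:

```python
# Back-of-the-envelope cell-count estimate for a long, skinny pipe.
# All numbers are illustrative assumptions.

L = 10.0                     # pipe length [m]
D = 0.05                     # pipe diameter [m]
h = D / 20.0                 # cell size needed to resolve the cross-section
aspect_ratio = 50.0          # streamwise stretching of the structured hex mesh

cells_cross_section = (D / h) ** 2                    # ~20 x 20 cells across the pipe

# Isotropic (typical unstructured) mesh: streamwise spacing equals cross-stream spacing.
cells_isotropic = cells_cross_section * (L / h)

# Stretched structured mesh: streamwise spacing is aspect_ratio times larger.
cells_stretched = cells_cross_section * (L / (aspect_ratio * h))

print(f"isotropic cells : {cells_isotropic:,.0f}")    # ~1,600,000
print(f"stretched cells : {cells_stretched:,.0f}")    # ~32,000
print(f"saving factor   : {cells_isotropic / cells_stretched:.0f}x")
```

The saving factor is simply the streamwise aspect ratio, which is why stretched structured meshes remain so attractive for this kind of geometry.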
However, it is important to stress the following fundamental points about state-of-the-art unstructured meshing:
1 - For complex geometries, the cost saving on human setup time is very significant. Thus a pretty good mesh that you can afford to generate is better than a great mesh that you do not have the resources to create.
2 - Advanced unstructured meshes, such as the hybrid element technique available in CFX-5, resolve boundary layers very nicely through the anisotropic prism layer, and have numerous automatic algorithms to resolve various geometric details (e.g. curvature and proximity sensitivity). Our tests indicate that with second-order numerical schemes they give very satisfactory accuracy (first-order results can be very diffusive; see the small 1D sketch after this list). When comparing "accuracy per node" the issue is more complicated and is usually a function of how stretched the structured mesh is, as discussed above. In other words, a hand-crafted stretched structured mesh on an anisotropic geometry will achieve a given degree of accuracy with fewer nodes than an isotropic unstructured mesh. However, the overall question of "cost per accuracy" - which includes the user setup time - may well still favor the quality unstructured mesh. Many demanding users now rely on these high quality meshes for difficult applications where accuracy is key, e.g. aerodynamics.
3 - Unstructured mesh generation techniques continue to improve. Solution adaptation also plays a role in the ultimate meshing strategy. Finally, developments are ongoing to further automate unstructured hexahedral element meshing so as to produce multi-block structured meshes. Thus I expect that quasi-automatic unstructured meshing technologies will evolve to include all element types and use each as optimally as possible, including hexahedral elements.
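To make the "first order is diffusive" point in item 2 concrete, here is a minimal, generic 1D advection demonstration (a textbook sketch, not CFX-5's actual discretisation): a pulse that should simply be transported downstream is progressively smeared by a first-order upwind scheme, while a classic second-order scheme preserves it far better.

```python
# Minimal 1D illustration of first-order numerical diffusion vs. a second-order scheme.
# Generic textbook schemes; not the discretisation used in any particular CFD code.
import numpy as np

n, c, steps = 200, 0.5, 2000                 # cells, Courant number, time steps
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-((x - 0.3) / 0.08) ** 2)        # smooth pulse, advected to the right

def first_order_upwind(u):
    return u - c * (u - np.roll(u, 1))

def lax_wendroff(u):                         # a classic second-order scheme
    return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * c ** 2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)))

u1, u2 = u0.copy(), u0.copy()
for _ in range(steps):                       # periodic domain: the pulse just wraps around
    u1 = first_order_upwind(u1)
    u2 = lax_wendroff(u2)

# The exact solution only translates the pulse, so its peak should stay at 1.0.
print(f"first-order peak after {steps} steps : {u1.max():.2f}")   # strongly smeared
print(f"second-order peak after {steps} steps: {u2.max():.2f}")   # much closer to 1.0
```

The contrast between the two printed peak values is the "very diffusive" behaviour referred to above.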
The answer to the last part of this question is that the CFX business will continue to support customers using our mature structured mesh codes for as long as they are in use. With respect to new developments, there will be no further releases of CFX-4, and after the next major release of CFX-TASCflow (version 2.12, available in October) there will be only one further maintenance release. CFX development is now focused on CFX-5 and our special purpose tools for vertical applications.
Q. A recent development is that industry is using more and more clusters built from mass produced hardware (common PCs). This results in massive CPU power at a reasonable price. However, since commercial CFD packages often still charge license costs per CPU, this conflicts with the hardware development. We (industry) are forced to invest in expensive, relatively fast (RISC) CPUs to reduce the overall cost of hardware plus software. Will this licensing strategy ever be adapted to these directions in hardware?
CFX's licensing costs distinguish between serial and parallel usage. The cost of licensing additional CPUs for a parallel setup is less than 10% of our base price, including for PC clusters.
Q. How do you balance the prioritization of maintenance-type requests (e.g. "a button that did x, y, z would be useful") against software changes that have a much better return on investment (i.e. new product lines, competitively differentiating functionality)? In my opinion the two are always contradictory, but both are necessary.
The distinction appears to be one between fundamental new capability and usability issues. Both are important to users and both are required to provide valuable solutions to customers - I would not say they are in contradiction. The broader issue is one of how you manage requirements and that is an integral part of the Software Engineering process at CFX.
Q. Is it inevitable that CFD becomes completely integrated with solid modeling software such as Pro/ENGINEER or CATIA? If so, can the integration be done well by two separate companies working together or do you see the large CAD companies developing/buying up CFD software so that they can integrate it into their software?
I think CFD will evolve in several directions, as needed by differing customer bases and applications. For example, R&D experts who do troubleshooting will want the fully general and flexible standalone CFD product; many design engineers will want the special purpose or "vertical application" customized versions; and non-CFD users would no doubt prefer CAD-integrated versions. The key to making all these options technically possible, from the commercial CFD vendor's point of view, is to develop a flexible, modular and portable architecture for the CFD technology - which we have in CFX-5. With respect to how CAD integration would be managed, again I see a spectrum spanning interfacing to public standards, corporate partnerships, and industry consolidation.
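As a purely hypothetical illustration of that "modular architecture" point (none of the class or function names below are real CFX or CAD APIs), the idea is that one solver core can sit behind several very different front ends:

```python
# Hypothetical sketch of a modular CFD architecture: one solver core, many front ends.
# All names here are invented for illustration; they are not CFX or CAD-vendor APIs.
from dataclasses import dataclass, field

@dataclass
class CaseDefinition:
    mesh_file: str
    physics: dict = field(default_factory=dict)   # e.g. {"turbulence": "k-epsilon"}

class SolverCore:
    """General-purpose CFD engine, independent of any user interface."""
    def solve(self, case: CaseDefinition) -> dict:
        # ... discretise, iterate, converge ...
        return {"status": "converged", "mesh": case.mesh_file}

class StandaloneGUI:
    """Fully general front end for the CFD expert / trouble-shooter."""
    def __init__(self, core: SolverCore):
        self.core = core
    def run(self, case: CaseDefinition) -> dict:
        return self.core.solve(case)

class PumpDesignApp:
    """A 'vertical application': exposes only the inputs a pump designer cares about."""
    def __init__(self, core: SolverCore):
        self.core = core
    def run(self, mesh_file: str) -> dict:
        case = CaseDefinition(mesh_file, {"turbulence": "k-epsilon", "rotating": True})
        return self.core.solve(case)

class CADEmbeddedFrontEnd:
    """Driven from inside a CAD package, which supplies the meshed geometry."""
    def __init__(self, core: SolverCore):
        self.core = core
    def run_from_cad(self, exported_mesh: str) -> dict:
        return self.core.solve(CaseDefinition(exported_mesh, {"energy": True}))

core = SolverCore()
print(PumpDesignApp(core).run("impeller.msh"))     # same engine, specialised front end
```

In a layout like this, choosing between standalone, vertical, and CAD-embedded delivery becomes largely a front-end question rather than a solver rewrite, which is the point of the modular architecture.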
Q. Is there an industry in which you could say that CFD is truly under-utilized? Why is that the case? If cost is the primary issue (most likely), have you considered marketing a reduced price version of your software to such an industry?
Personally, it strikes me that most industries are under-utilizing CFD! Certainly fluid flow, heat transfer, etc., exist in almost all engineering applications, and one would think that knowing more about them ought to be worthwhile. Discussions about the prerequisites for increased penetration into these areas certainly do occur at CFX. One strategy, of course, is the use of special purpose tools. Another that we at CFX are excited about is "Enterprise Accessible Software Applications", which, among other things, allows users to create their own special purpose tools for their own businesses; this could facilitate greater utilization, probably better than an externally developed solution would.
Q. Is there an inherent limit to the accuracy of CFD predictions which will always remain due to the physics involved? For instance, in your opinion, will highly accurate turbulence and combustion models ever be practical for a design calculation or are the resource demands simply too high?
If you look at CFD progress over, say, a decade-long interval, then it is very clear that great strides are being made in accuracy and in applicability to more and more complex applications. This will definitely continue, both from a scientific modeling and a CPU power point of view. Given the ongoing rate of progress, there are very few areas that appear to be inherently too resource intensive (Direct Numerical Simulation of submarine flows being one of them). It is not a question of "if" but only "when".
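To indicate why DNS of something like a full-scale submarine sits in that "inherently too resource intensive" category, here is the standard back-of-the-envelope estimate. The Reynolds number is an assumed order of magnitude, and the scalings are the usual Kolmogorov-based ones (grid points ~ Re^(9/4), time steps ~ Re^(3/4), so total work ~ Re^3):

```python
# Classical DNS cost scaling: grid points ~ Re^(9/4), time steps ~ Re^(3/4),
# hence total work ~ Re^3. All specific numbers below are assumed, for illustration only.

Re = 1.0e9                            # assumed length-based Reynolds number, full-scale hull
flops_per_cell_per_step = 1.0e3       # assumed cost of the numerics per cell per time step

grid_points = Re ** (9.0 / 4.0)       # ~2e20 cells
time_steps = Re ** (3.0 / 4.0)        # ~6e6 steps
total_flops = grid_points * time_steps * flops_per_cell_per_step   # ~1e30 operations

print(f"grid points : {grid_points:.1e}")
print(f"total flops : {total_flops:.1e}")
```

Even allowing several orders of magnitude of error in these assumptions, the total is so far beyond today's (or any foreseeable) machines that this remains one of the few cases where the "when" is still very distant.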
Q. Could you imagine Open Source Software development playing a role in CFD in the near future similar to the one it plays in the operating system world (Linux vs. Windows vs. the established Unixes) today?
We have tossed these ideas around as we review how we develop CFD software. Personally, I struggle to see how this model can work with something as complex as CFD. In my opinion, too much of the CFD development work requires too high a degree of coordination to fit that model. Certainly, with openness initiatives like flexible User Fortran, we have already seen customers independently develop useful modules for their own applications, but this is always within a larger, pre-defined framework.
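The pattern is the familiar one of user routines plugged into fixed hook points. The sketch below is purely illustrative of that idea (it is not the actual CFX User Fortran interface, and all names in it are invented):

```python
# Illustrative "user module inside a pre-defined framework" pattern.
# The framework fixes the hook points and data contract; the user supplies the physics.
from typing import Callable, Dict

_user_hooks: Dict[str, Callable[[dict], dict]] = {}

def register_hook(name: str):
    """Decorator the framework exposes so users can attach their own routines."""
    def wrap(func: Callable[[dict], dict]) -> Callable[[dict], dict]:
        _user_hooks[name] = func
        return func
    return wrap

# --- customer-written module: a made-up source term for a transported scalar ---
@register_hook("scalar_source")
def my_reaction_sink(fields: dict) -> dict:
    k = 0.5                                     # illustrative rate constant
    fields["source"] = -k * fields["phi"]       # simple first-order sink term
    return fields

# --- inside the framework's (pre-defined) solver loop ---
def solver_iteration(fields: dict) -> dict:
    # ... assemble and solve the transport equations ...
    hook = _user_hooks.get("scalar_source")     # call the user routine at the fixed point
    return hook(fields) if hook else fields

print(solver_iteration({"phi": 2.0})["source"])   # -> -1.0
```

The value to the customer comes from the user routine, while the coordination-heavy work stays in the framework, which is the division of labour described above.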
Q. This question is a general question about the future of CFD. I am not "complaining" about any AEA CFX product. From 1999 to 2003, microprocessors have gained about 10X in computational performance. Chips now have a peak floating point performance of around 4 GFlops. However, for real world applications this is definitely not the case: typical CFD codes achieve about 10% of peak on x86 systems. Many have claimed this is because of memory performance, but that is only part of the story. For instance, CFD codes assume a flat memory system and rely heavily on the compiler to improve performance. Compilers rarely fix bad programming, so bad performance is the result. The only way to get great performance is to rewrite the entire code. Obviously, this is getting old, and is not economical. My question is: do you think there is a way to get good serial and parallel performance on the latest processors without having to rewrite an entire CFD solver? (In other words, separate the algorithm from the microprocessor's inner workings?) Thanks, Tony
I'm not sure I understand or agree with the premise. Our current software architecture in CFX-5 routinely demonstrates excellent parallel performance (e.g. scalably achieving near 100% efficiency on a variety of parallel configurations for sufficiently large meshes). We also work closely with the hardware vendors to ensure that our software and their compilers produce good binary code, e.g. for the Itanium series of 64-bit chips. Certainly, if a new chip architecture required a software rewrite then it would be unlikely that we would do it, given the enormous cost of developing general CFD software (and I suspect such a chip would not be very successful commercially!).
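The "sufficiently large meshes" caveat has a simple explanation: in a domain-decomposed solver the per-partition compute work scales with the number of cells (a volume), while the communication scales roughly with the partition surface. The toy model below uses assumed timing constants purely to show that trend; it is not measured CFX-5 data:

```python
# Toy model of parallel efficiency for a domain-decomposed solver.
# The timing constants are assumptions chosen only to illustrate the trend:
# compute scales with cells per partition (volume), communication with its surface,
# so efficiency climbs toward 100% as the mesh per CPU grows.

def parallel_efficiency(total_cells: float, n_cpu: int,
                        t_cell: float = 1.0e-6,    # assumed compute time per cell [s]
                        t_face: float = 5.0e-6     # assumed comm time per interface cell [s]
                        ) -> float:
    cells_per_cpu = total_cells / n_cpu
    compute = cells_per_cpu * t_cell
    comm = (cells_per_cpu ** (2.0 / 3.0)) * t_face   # surface of a roughly cubic partition
    return compute / (compute + comm)

for total_cells in (1.0e5, 1.0e6, 1.0e7):
    eff = parallel_efficiency(total_cells, n_cpu=16)
    print(f"{total_cells:.0e} cells on 16 CPUs -> efficiency ~ {eff:.0%}")
```

In this toy model the communication term becomes a small fraction of the compute work once each CPU holds enough cells, which is the sense in which efficiency approaches 100% "for sufficiently large meshes".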
Q. How do you define your release schedule as there are often conflicts:
Users want new functionality and enhancements
Users do not want to install and test new versions
Do you follow a scheme where you define what will be in a release and then let this define a date, or vice versa: define a date and see what makes the release?
Generally it is more the latter, but really it is an iterative procedure. We like a release schedule of about 9 months, as this is frequent enough to ensure steady progress from new versions, but not so frequent that the overhead of testing, debugging, and releasing a new version dominates the development effort. The Development Managers assess what can be done under various timing scenarios, and in conjunction with our business leaders and customers we settle on a balance - sometimes requiring less or more than 9 months.