
Repository graph

Branches:
  • bugfix/fix_rigidbodymotion_difference
  • decasteljau
  • feature/ARRN-mod
  • feature/HM-numericalBenchmark
  • feature/HarmonicmapsBenchmark
  • feature/SimoFoxWithLocalFEfunctions
  • feature/bendingIsometries
  • feature/bendingIsometries-PBFE-Stiefel
  • feature/harmonicmapsAddons
  • feature/introduceRetractionNotion
  • feature/riemannianTRaddons
  • feature/simofoxBook
  • fix-fd-gradient-scaling
  • fix_localrodassembler_compiler_error
  • issue/vtk-namespace
  • make_rod-eoc_run
  • master (default)
  • releases/2.0-1
  • releases/2.1-1
  • releases/2.10
Commit history (newest first):
  • [bugfix] The solver must always use the global Dirichlet vector
  • Write second order elements if the function space is a P2 space
  • Don't downsample when the input basis is P1 anyway
  • Remove trailing whitespace
  • Temporary: Add the z-coordinate of the deformation as a scalar field
  • Use the new CosseratStrain object, but without any optimizations yet
  • Write to VTK instead of AmiraMesh
  • Rename members of TransferVectorTuple, for better readability
  • Start an entity-based interface for GlobalIndex
  • More cleanup
  • Some code cleanup
  • Code cleanup
  • Do not modify the stiffness matrix to account for Dirichlet nodes
  • Introduce methods reduceAdd and reduceCopy for the MatrixCommunicator
  • Fix screen output
  • Some cleanup
  • Add a method VectorCommunicator::scatter, and use it
  • Use the new methods VectorCommunicator::reduceAdd and ...::reduceCopy
  • Introduce new methods reduceAdd and reduceCopy
  • Argg, can't use FE basis size in parallel...
  • [bugfix] Use FE basis to determine size of toplevel hasObstacles array
  • Complete the parallelization for the 1st-order case
  • Use transfer operator even for the size of the toplevel hasObstacle field
  • Use the transfer operators to determine the sizes of the hasObstacle arrays
  • Print status messages only if our rank is '0'
  • Distribute energy and model decrease over all processors
  • Security branch to record the state before the riemanniantrsolver receives parallelization patches
  • Use 'fmin' instead of 'min', and 'fmax' instead of 'max'
  • Do not have mpiHelper_ as a member of RiemannianTrustRegionSolver after all
  • Allow different index objects for matrix rows and columns
  • Add various helper functions needed to parallelize the GFE code
  • Implement distributed energy computation
  • Distribute the grid over the processors after refinement
  • Create an MPIHelper object, and hand it over to the RiemannianTrustregion solver
  • Assemble only on Interior_Partition
  • Hand-code one specific matrix-matrix multiplication
  • Call MPIHelper::instance, we are starting to get this code parallelized
  • Fix boundary values for the new smaller domain
  • Add missing include
  • Fix domain dimension in the Taylor/Bertoldi/Steigmann example
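Several of the commits above trace one recurring pattern: each rank assembles a local energy contribution on its interior partition, the contributions are summed over all processors, and status output is restricted to rank 0. The following is a minimal illustrative sketch of that pattern, not code from this repository. It assumes a recent dune-common where Dune::MPIHelper::getCommunication() is available; localEnergy() is a hypothetical stand-in for the real GFE assembler.

    #include <iostream>
    #include <dune/common/parallel/mpihelper.hh>

    // Hypothetical per-rank energy contribution, standing in for an
    // assembler that loops over the Interior_Partition of a distributed grid
    double localEnergy(int rank)
    {
      return 1.0 / (rank + 1);
    }

    int main(int argc, char** argv)
    {
      // Create an MPIHelper object (cf. "Call MPIHelper::instance" above)
      auto& mpiHelper = Dune::MPIHelper::instance(argc, argv);
      const auto& comm = mpiHelper.getCommunication();

      // Distribute the energy over all processors: a global sum reduction
      double energy = comm.sum(localEnergy(comm.rank()));

      // Print status messages only if our rank is '0'
      if (comm.rank() == 0)
        std::cout << "total energy: " << energy << std::endl;

      return 0;
    }

The same sum reduction applies to the model decrease of the trust-region step, which is why the commits distribute both quantities together.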