Shall we start? My name is Jens Jørgen. Who else is here?
I'm here
--> olga_ (82e9cc93@gateway/web/freenode/ip.130.233.204.147) has joined #gpaw
hi
Hi Olga and Marcin
olga_: I just saw your email - I'll take a look later ...
ok. It looks like the syntax in the parametrized xc got mixed up somehow
maybe it is only that
What do you mean?
Before one would use 'PXC:1.0*GGA_X_PBE'; now it would be '1.0_GGA_X_PBE'
I think the name is being interpreted as something else.
The example that uses LDA_X works fine - I mean the test
OK
olga_: The LDA_K_TF functional depends on the density only - right?
Yes, it is the kinetic part. But then one has the freedom to add the xc as either LDA or GGA.
The parametrized xc then helps to add all the terms in a single definition that is treated as the xc.
I have to leave soon but will read the summary of the chat later. Bye!
<-- olga_ has quit (Quit: Page closed)
OK, see you!
Too late!
Well, that leaves you and me, Marcin!
hello
well I am here as well
Hi Ask
Hi
So, what's with the maxrss test? Does anyone know what it does?
It allocates arrays in a loop and checks whether Linux's /proc/ reports the right memory increase.
And apparently it doesn't at the moment.
Perhaps the process reuses some memory it already has, so there is no increase? That test should perhaps always run in a separate process.
Unrelated question: is tddft.org down?
I moved maxrss close to the beginning of the tests - maybe that will help.
jensj_: It seems to be down for me.
Grrr. I need to look at the libxc docs!
marcind: Every time you move a test, I will move it back some day by accident.
marcind: You should leave comments that communicate all the information necessary for programmers to understand why things are the way they are.
If moving maxrss.py up helps, I'll add a comment saying "don't move"
or better yet, "don't move because X"
Anyway, we could also run it in a separate process, but let's see how it fares.
I guess the reason is that it reuses memory and therefore there is no increase, but I didn't understand the failure message.
askhl: Do you know if it is possible to use BLACS to redistribute a matrix from part of world to all of world?
jensj_: Hmmm. That should be possible, yes.
Is there an example somewhere?
Let me see. Have you attempted it and encountered a failure? I.e., creating a Redistributor with world and the two descriptors and running redistribute()?
Yes, I could not make it work.
As long as world is a supercommunicator of both descriptors' layouts, it should work. Can you send a script which doesn't work?
Hmm. The example I have is not possible to read for people that are not me.
That sounds like a challenge, but I am too prudent by now to take it.
"prudent" is not in my vocabulary?
well, wise :)
Maybe the libxc documentation can be found in the newest package from Launchpad.
I'll try a bit more now that you say it can be done, and get back to you if I fail.
https://launchpad.net/ubuntu/vivid/+source/libxc
Speaking of source code hosting - that reminds me of git. Have you guys had a look at gitlab.org?
jensj_: well, it should be possible; the two BLACS grids are allowed to be different, and that should make it always possible. If the matrices have the same size... :)
I haven't had time :(
I think the first step is to simply map the entire svn history with all branches to a local git archive. After that we can think about how to host such an archive.
The problem right now is what to pass to the Redistributor from those ranks that don't have the initial matrix ...
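A hedged sketch of the two parametrized-xc spellings discussed above. The xc strings are taken from the chat itself; the H2 setup and calculator parameters are only illustrative, and the parametrized-xc feature may have changed since, so treat this as a sketch rather than the current GPAW API:

    from ase.build import molecule
    from gpaw import GPAW

    atoms = molecule('H2')
    atoms.center(vacuum=3.0)

    # Old spelling from the chat:
    #   xc = 'PXC:1.0*GGA_X_PBE'
    # New spelling from the chat:
    atoms.calc = GPAW(mode='fd', xc='1.0_GGA_X_PBE')
    energy = atoms.get_potential_energy()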
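A minimal sketch of what a maxrss-style test might do - hypothetical, not the actual gpaw/test/maxrss.py - illustrating how allocator reuse of previously freed pages could make the reported increase smaller than expected:

    import numpy as np

    def vmrss_bytes():
        # Read the resident set size from /proc/self/status (Linux only).
        with open('/proc/self/status') as fd:
            for line in fd:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1]) * 1024  # value is in kB
        raise RuntimeError('VmRSS not found')

    before = vmrss_bytes()
    arrays = []
    nbytes = 0
    for _ in range(10):
        a = np.ones(10**6)  # ~8 MB; pages are touched by writing ones
        arrays.append(a)
        nbytes += a.nbytes
    grown = vmrss_bytes() - before
    # If earlier tests freed memory that malloc now reuses, 'grown' can be
    # well below 'nbytes' even though the allocations themselves succeeded.
    assert grown > 0.5 * nbytes, (grown, nbytes)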
If we do that, we should start with ASE.
It must be called on all CPUs on the supercommunicator (presumably), and then, well, it could be that some of the shape checks are overly aggressive in those cases where a CPU has nothing,
because maybe you have a case that we have never used and which thus may not currently be permitted.
But it should be possible to redistribute from arbitrary layout to arbitrary layout.
OK - you give me hope :-)
Anything else we should talk about?
hmm
I am told there's a talk in two minutes
so if we can continue talking for more than two minutes, I am off the hook
so uhhhh
tddft.org is up again
Should I make that change to the observers?
ahh nice
I'm going too, bye
askhl: I guess that will break someone's observer!
okay, bye marcind
see you
jensj_: Well, have we promised not to break compatibility? :)
No :) If you document the change then it's OK by me.
Okay. We can see if people complain.
I have run a few lcao-tddft simulations now. I can probably write some documentation soon.
Great!
By "soon" I mean on a geological time-scale; it will be practically instantaneous.
hmm, two minutes have gone and the meeting is still going strong. I guess I won't make it to that talk after all.
Ah well, some other time.
By the way, I have run our good old 1415-atom Au cluster on Niflheim with 64 cores on 4 of the new nodes. Converged in 27 iterations; 4 hours of computation time. LCAO/dzp, of course.
As we know, large systems are often more tricky to converge than small ones. It turns out that this is just due to the electronic temperature: all the nasty stuff requiring hundreds of iterations seems to be solved easily if one cranks up the smearing to 0.1 instead of the 0.01 that we are fond of for clusters.
OK, that's good to know.
askhl: have you seen the new experimental gwap command-line tool?
jensj_: I have seen it, but I have not tried it, as I don't know the state of it.
It is one command to run several sub-commands: atom, dataset, diag, dos, info, rpa, run, test, xc.
If one runs it without arguments it gives an IndexError.
Instead of having gpaw-test, gpaw-this and gpaw-that.
Try "gpaw -h".
I think the name is quite catchy.
Are you sure you don't mean gwap -h?
Sure - that was what I wrote - there must be some line noise.
Probably uses UDP, so it doesn't guarantee the ordering.
Could eb. Waht od you think of this idea?
Wow, the plots are quite fancy.
In atom mode, does it run the new setup generator?
yes
H2 molecule: "ase-build H2 -v2 | gwap run -p mode=lcao"
It's quite advanced. It needs an easter egg like the 'recipe' calculation mode in octopus, though.
What is this recipe easter egg?
Well, the CalculationMode variable can have different values, like gs (ground state), unocc, td, and others. One of them is recipe, in which case the programme prints out a tasty recipe.
I think some memory optimization is needed, though, for larger systems. The distribution is somewhat uneven (talking about the big systems).
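A hedged sketch of redistributing a matrix from part of world to all of world with GPAW's BLACS wrappers, following the Redistributor/redistribute() pattern from the discussion above. BlacsGrid, new_descriptor and the chosen block sizes are assumptions modelled on GPAW's ScaLAPACK test scripts, and this needs a GPAW build with ScaLAPACK; the open question in the chat is exactly what the non-owning ranks should pass, so here every rank hands over an array of its local descriptor shape (0x0 on ranks outside the source grid):

    import numpy as np
    from gpaw.mpi import world
    from gpaw.blacs import BlacsGrid, Redistributor

    N = 64
    # Source layout: the whole matrix on rank 0 only (a 1x1 grid).
    srcdesc = BlacsGrid(world, 1, 1).new_descriptor(N, N, N, N)
    # Target layout: block-cyclic over all of world.
    dstdesc = BlacsGrid(world, world.size, 1).new_descriptor(N, N, 8, 8)

    src = srcdesc.empty(dtype=float)  # (N, N) on rank 0, (0, 0) elsewhere
    if world.rank == 0:
        src[:] = np.arange(N * N, dtype=float).reshape(N, N)
    dst = dstdesc.empty(dtype=float)

    # Collective: must be called on all ranks of the supercommunicator.
    Redistributor(world, srcdesc, dstdesc).redistribute(src, dst)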
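A small sketch of the smearing change described above - raising the Fermi-Dirac width from the usual 0.01 eV to 0.1 eV for a metal cluster in LCAO/dzp. The Icosahedron stand-in and the other parameters are illustrative, not the actual 1415-atom setup:

    from ase.cluster import Icosahedron
    from gpaw import GPAW, FermiDirac

    atoms = Icosahedron('Au', noshells=2)  # tiny stand-in for the 1415-atom cluster
    atoms.center(vacuum=4.0)
    atoms.calc = GPAW(mode='lcao', basis='dzp',
                      occupations=FermiDirac(0.1))  # eV; 0.01 is the usual choice
    energy = atoms.get_potential_energy()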
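A hypothetical sketch - not the actual gwap implementation - of a sub-command dispatcher that prints help instead of raising an IndexError when invoked with no arguments:

    import sys
    import argparse

    def main(argv=None):
        parser = argparse.ArgumentParser(prog='gwap')
        subparsers = parser.add_subparsers(dest='command')
        for name in ('atom', 'dataset', 'diag', 'dos', 'info',
                     'rpa', 'run', 'test', 'xc'):
            subparsers.add_parser(name)
        args = parser.parse_args(argv)
        if args.command is None:
            # Bare 'gwap': show usage instead of crashing.
            parser.print_help()
            sys.exit(1)
        print('would dispatch to sub-command %r' % args.command)

    if __name__ == '__main__':
        main()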