Message boards : News : CPU jobs on Linux


GDF
Volunteer moderator | Project administrator | Project developer | Project tester | Volunteer developer | Volunteer tester | Project scientist
Joined: 14 Mar 07 | Posts: 1925 | Credit: 629,356 | RAC: 0
Message 48820 - Posted: 5 Feb 2018 | 19:30:28 UTC

Hi, we need more CPUs on Linux to run QM simulations. Can anybody help?

Do you want to make them more appealing to crunch? Take the QMML tasks off the BOINC CreditNew credit award mechanism and assign them fixed values like you do for the GPU tasks.

Sure, I can help.

Matt Kowal
Joined: 27 May 14 | Posts: 9 | Credit: 92,568,568 | RAC: 0
Message 48824 - Posted: 5 Feb 2018 | 21:18:15 UTC
Last modified: 5 Feb 2018 | 21:23:18 UTC

This forum news post was syndicated to your Twitter account; however, the link is broken.



Relevant post: https://twitter.com/gpugrid/status/960604705171808256



The link resolves to https://www.gpugrid.net/extra_arg_utm_source.html



I have reposted your call to the BOINC subreddit

Do you want to make them more appealing to crunch? Take the QMML tasks off the BOINC CreditNew credit award mechanism and assign them fixed values like you do for the GPU tasks.

+1



PS: Unfortunately, they frequently crash the computer with my strongest GPU, so I will not run them on that machine.

Unfortunately, I gave up on Linux and run Win 10.





Hi, we need more CPUs on Linux to run QM simulations. Can anybody help?



____________

John

Do you want to make them more appealing to crunch? Take the QMML tasks off the BOINC CreditNew credit award mechanism and assign them fixed values like you do for the GPU tasks.





We are testing this.

I've had good luck with the Linux apps up until the recent GPU application errors that started this month. The CPU tasks ran fine.



I and others have voiced our displeasure with the credit awarded for the flops used for the QM tasks in this thread.

New Student and QMML Project

We are testing different credit systems now.

Toni
Volunteer moderator | Project administrator | Project developer | Project scientist
Joined: 9 Dec 08 | Posts: 958 | Credit: 4,353,973 | RAC: 0
Message 48834 - Posted: 6 Feb 2018 | 10:45:39 UTC - in response to Message 48833.
Last modified: 6 Feb 2018 | 10:46:23 UTC

We don't use CreditNew but the previous credit system. In any case, two changes were made yesterday:



* CPU threads are limited to 4 (you should still be able to crunch multiple WUs at once, please check)

* Credits should be in line with other projects'



Let us know.



* CPU threads are limited to 4 (you should still be able to crunch multiple WUs at once, please check)



Let us know.



BOINC is still assigning all of my 32 threads to one task even though it is only using 4.

Hi, we need more CPUs on Linux to run QM simulations. Anybody can help?

I have three machines on it, but I can run only one work unit at a time on average. That is because when any more start up at once, they error out, as has been discussed before. And I run two cores per work unit for efficiency. But if you could solve the start-up problem, I could run more.

I don't see that you have made an announcement on the BOINC forum yet. The Projects section would probably be best.

http://boinc.berkeley.edu/dev/forum_forum.php?id=11



I have an Intel 2600K with 8 logical cores. I wanted to reserve 2 cores for feeding 2 GPUs that are running Folding@home. I set the computing preference in BOINC to use, at most, 80% of processors (6 cores).



Data on 19 WUs before GPUGrid changes (processor usage varied but 5-6 cores on average I think)



Average run time (sec): 3,129.23

Average CPU time (sec): 16,122.19

Average credit per WU: 228.7

Average WU per day: 27.6

PPD: 6314.8



Data on 15 WUs after GPUGrid changes (processor usage was 4 cores even though set at 6)



Average run time (sec): 3,566.32

Average CPU time (sec): 14,114.28

Average credit per WU: 819.284

Average WU per day: 24.2

PPD: 19848.5



Summary: Processor usage seems to be maxed out at 4. Run time has increased, CPU time has decreased, WUs completed per day have decreased, and PPD has increased significantly.
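The PPD figures above are just average credit per WU times WUs completed per day. A quick sketch of that arithmetic (using the rounded values from the post, so the results differ slightly from the quoted PPD):

```python
# Points-per-day from the per-WU averages posted above. The poster's PPD
# figures were computed from unrounded data, so these come out slightly off.
def ppd(avg_credit_per_wu, avg_wus_per_day):
    """Daily credit = average credit per work unit x work units per day."""
    return avg_credit_per_wu * avg_wus_per_day

before = ppd(228.7, 27.6)    # post reports 6314.8
after = ppd(819.284, 24.2)   # post reports 19848.5
```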





Toni
Volunteer moderator | Project administrator | Project developer | Project scientist
Joined: 9 Dec 08 | Posts: 958 | Credit: 4,353,973 | RAC: 0
Message 48840 - Posted: 6 Feb 2018 | 14:57:09 UTC - in response to Message 48836.
Last modified: 6 Feb 2018 | 15:09:56 UTC





BOINC is still assigning all of my 32 threads to one task even though it is only using 4.



Ouch. This should not happen. May be fixed now.

Hi, we need more CPUs on Linux to run QM simulations. Anybody can help?

I have three machines on it, but I can run only one work unit at a time on average. That is because when any more start up at once, they error out, as has been discussed before. And I run two cores per work unit for efficiency. But if you could solve the start-up problem, I could run more.



I agree. There were too many issues that had not been resolved. On top of that, the credit was much worse than even CreditNew. I see the credit has been changed; I'm just saying there are reasons for the lack of CPU time.

Are you sure that you are using an "older" credit mechanism? Richard Haselgrove corrected my assumption in the "New Student and QMML" thread, questioning whether you could be using an "older" mechanism:



Be careful of your terms. 'CreditNew' has been the default BOINC mechanism since 2010. I suspect this is what GPUGrid is using for these tasks: the support mechanisms for 'even older credit' have been removed from the codebase.

I wonder where you found the removed older codebase that contained the "older" credit award algorithm. If you do in fact have it, I would like access to it, or to have it reinstituted into the BOINC GitHub codebase.



It would be helpful in persuading the BOINC maintainers that there is in fact a way to return to the older credit algorithm. One of their stated reasons for not changing from CreditNew is that they no longer have the original code and can't replicate it.



That said, it looks like the award for QC cpu tasks is much more appealing now.

Hmmm ... I just ran a new QC task with the supposed new credit. Not seeing any difference.



Run time 3,292.22

CPU time 12,925.47

Validate state Valid

Credit 110.63



I used 4 cores to generate 110 credits for 54 minutes of compute time.



I can use one core to generate 108 credits for 60 minutes of compute time for SETI CPU tasks.



No reason to run these tasks still for me.
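The comparison above is really about credit per core-hour; a quick sketch of the arithmetic behind it:

```python
# Credit efficiency from the numbers quoted above:
# GPUGRID QC: 110 credits, 4 cores, 54 minutes; SETI: 108 credits, 1 core, 60 minutes.
def credits_per_core_hour(credit, cores, minutes):
    """Normalize a task's credit by the core-hours it consumed."""
    return credit / (cores * minutes / 60.0)

gpugrid_qc = credits_per_core_hour(110, 4, 54)  # roughly 30.6 credits/core-hour
seti = credits_per_core_hour(108, 1, 60)        # 108.0 credits/core-hour
```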

Since just after midday yesterday (UTC) I have noticed an increase in credit awarded for the QC WUs. Doing some quick calcs, it appears the increase is about 4.5x what we were getting. It also appears they are more fixed in value, proportional to the size of the WU. My faster machines are getting over 500 credits/hr (4 cores) of compute time, whereas the slower machines are getting proportionately less per hour, still earning equivalent credit but over a longer period of time than the faster ones.



Well, at least my avg credit per day will not be taking as much of a hit as it has been without Linux GPU WUs :).



So far, I have 5 machines with 4 cores each running the QC project and will add one more when I install the memory upgrade on one of the headless systems.

I'm not sure the higher credit is not due to the larger molecule size in the latest tasks that Dominik explained to me here.

No higher credits for the tasks I've crunched yesterday and today.



I will stop and try to understand what is happening.

I've also been having trouble with work units erroring out in this way. I have over 150 cpu cores spread out across a variety of machines, all under linux. Just a few moments ago I attempted to attach to GPUGRID only to have computational error after computational error. I hope this is fixed shortly as I would love to have GPUGRID as one of my default projects.

Yes, credit is doing something very strange. I got 111 credits for 3292 seconds of cpu time.



klepel got 1362 credits for the same time.



Run time 3,262.28

CPU time 12,834.34

Validate state Valid

Credit 1,361.79



Task 13118994



I used 4 cores to generate 110 credits for 54 minutes of compute time.



I can use one core to generate 108 credits for 60 minutes of compute time for SETI CPU tasks.



No reason to run these tasks still for me.



Are you seriously comparing SETI to GPUGRID?!






Are you seriously comparing SETI to GPUGRID?!



CreditNew is CreditNew.






CreditNew is CreditNew.

You cannot compare one project's credit to another.

You cannot compare one project's credit to another.

At least it should be comparable as described in the BOINC documentation.

See: http://boinc.berkeley.edu/trac/wiki/CreditNew#Cross-projectversionnormalization






You cannot compare one project's credit to another.

One of the stated objectives of CreditNew is to make credit the same across all projects for the same amount of cobblestones used to compute.

Thus my comment about CreditNew. CPU projects that have a higher or lower than typical RAC are most likely using something other than CreditNew, like fixed credit or another algorithm.

Yes I understood your post and sentiment. My post was directed at the other poster's incredulous comment.



This project itself utilizes both mechanisms: CreditNew for CPU tasks and fixed credit awards for GPU tasks.



As far as I have been able to find, that is unique among projects. Usually it is either/or not both.

Yes I understood your post and sentiment. My post was directed at the other poster's incredulous comment.



This project itself utilizes both mechanisms: CreditNew for CPU tasks and fixed credit awards for GPU tasks.



As far as I have been able to find, that is unique among projects. Usually it is either/or not both.



I wasn't referencing you as I didn't quote you. ;)



I agree.

Would you please list the QC project progress on the server status page as well:



http://www.gpugrid.net/server_status.php



Thanks.



Another issue is that your app does not dynamically allocate CPU cores according to the BOINC settings. Instead it claims all physically present cores. That is a major problem when trying to run computations on the GPU as well because to do so, the GPU project automatically (or I manually) reserve(s) one CPU core per GPU task.

Example: When the BOINC manager is set to use 7 of the 8 cores to do CPU computations, your CPU client grabs all 8 cores (or since recently 2x 4 cores).

That is not acceptable.

Please fix this to attract more people to donate CPU cycles to your project.



Michael.

____________

President of Rechenkraft.net - Germany's first and largest distributed computing organization.

Another issue is that your app does not dynamically allocate CPU cores according to the BOINC settings. Instead it claims all physically present cores.



I'm seeing this behavior as well. On my Ryzen 1700X system with 2 GPUs, these WUs basically take over all CPUs and throw the GPUs into "Waiting to run".



I suppose one could set max_concurrent in an app_config.xml to fix this...what would be the proper app name to use?

QC is the proper app name. This is how I limit QC to 2 threads per task:

<app_config>
<app>
<name>QC</name>
<max_concurrent>1</max_concurrent>
</app>
<app_version>
<app_name>QC</app_name>
<plan_class>mt</plan_class>
<avg_ncpus>2.000000</avg_ncpus>
<cmdline>--nthreads 2</cmdline>
</app_version>
</app_config>





I also just run one task at a time to avoid the flaw in the application where two tasks start at the same time.

[VENETO] boboviz
Joined: 10 Sep 10 | Posts: 142 | Credit: 388,132 | RAC: 0
Message 48988 - Posted: 19 Feb 2018 | 8:24:05 UTC

An error on a WU (task 17040682):



<core_client_version>7.8.3</core_client_version>

<![CDATA[

<message>

process exited with code 195 (0xc3, -61)</message>

<stderr_txt>

An HTTP error occurred when trying to retrieve this URL.

HTTP errors are often intermittent, and a simple retry will get you on your way.

ConnectionError(MaxRetryError("HTTPSConnectionPool(host='repo.continuum.io', port=443): Max retries exceeded with url: /pkgs/main/linux-64/repodata.json.bz2 (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fc2a3f56860>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))",),)





Traceback (most recent call last):

File "pre_script.py", line 13, in <module>

raise Exception("Error installing h5py")

Exception: Error installing h5py

08:49:09 (1252): $PROJECT_DIR/miniconda/bin/python exited; CPU time 0.469942

08:49:09 (1252): app exit status: 0x1

08:49:09 (1252): called boinc_finish(195)



Hi, we need more CPUs on Linux to run QM simulations. Anybody can help?



Okay, I'm going to install Linux right now on my computer and it should be ready tonight or early tomorrow.

____________

Cruncher/Learner in progress.



Traceback (most recent call last):

File "pre_script.py", line 13, in <module>

raise Exception("Error installing h5py")

Exception: Error installing h5py

08:49:09 (1252): $PROJECT_DIR/miniconda/bin/python exited; CPU time 0.469942

08:49:09 (1252): app exit status: 0x1

08:49:09 (1252): called boinc_finish(195)





If you are running the latest distros from Ubuntu or Mint you may need to install the python-support package.



wget http://launchpadlibrarian.net/109052632/python-support_1.0.15_all.deb



sudo dpkg -i python-support_1.0.15_all.deb



One of my CPUs is crunching Linux jobs; I will add two more next week.

I have added three: Athlon 5350, i3-3240, i3-6100. Not the best ones, but they are doing their job. I can add more, but I would like to know how many WUs we can expect for the Linux QM app.



BTW ATM just 92 users are crunching QM, that's sad :(

I have added another machine, and now have four on it (2 i7-3770, 1 i7-4790 and 1 Ryzen 1700), with two to four cores per machine allocated via the resource share. The main problem in running them is that when two or more start up at the same time, they error out. That happens mainly during reboots, but otherwise they never start up at the same time. I leave my machines running 24/7, so I don't reboot very often.



And to minimize the problem, you can run with the default 4 cores per work unit and only 4 cores (or less) per machine on average, so that they usually don't start more than one work unit at a time anyway. In that way, it is a manageable problem for me, though it would be best if they fix it. I am sure more people would then be willing to run it.



Also a Windows version would help of course, and for that they had better be seriously thinking about VirtualBox.

Jim 1348 said: Also a Windows version would help of course, and for that they had better be seriously thinking about VirtualBox.



If you want to go the virtualbox route, you can create your own virtualbox instance, install your favorite flavor of Linux (I chose Ubuntu), make sure that gcc is installed, install BOINC and start running the Linux version of the Quantum Chemistry tasks.



It took me a few tries to figure out that gcc needed to be installed, but now they seem to be running fine.

I have added three. Athlon 5350, I3 3240, I3 6100. Not the best ones but doing their job. I can add more but I would like to know how many WUs can we expect for the Linux QM app?



BTW ATM just 92 users are crunching QM, that's sad :(



People aren't running it since it has so many issues that haven't been addressed.

Lucky me (not even a single problem on my end)

The main reason why I came here was to put my GPU's to work, my CPU cores are all very busy with CPDN.

Lucky me (not even a single problem on my end)



Start two at once..

I understand that. I'm using only 4c or 2c/4t CPUs, so I can't start two at once. Probably that's why I don't have any issues :)

Jim 1348 said: Also a Windows version would help of course, and for that they had better be seriously thinking about VirtualBox.



If you want to go the virtualbox route, you can create your own virtualbox instance, install your favorite flavor of Linux (I chose Ubuntu), make sure that gcc is installed, install BOINC and start running the Linux version of the Quantum Chemistry tasks.



Thanks, but I have five Ubuntu machines. I was offering that only as advice if they want to increase their processing power. They need to enlist the Windows users.



The Linux QM app runs fine, but it requires some packages to be installed. People should expect an infinite number of workunits...



We will run this forever.





The Linux QM app runs fine, but it requires some packages to be installed. People should expect an infinite number of workunits...



We will run this forever.







Wow, wow, wow! :D

I will add some more CPUs.



When can we expect some info about scientific results created with this app?



When can we expect some info about scientific results created with this app?



Months away, we have just started.



Could someone who knows how make a description, in the FAQ section, of exactly which dependencies are needed to make the QC app run flawlessly? I think that is a good place for it.



I understand that. I'm using only 4c or 2c/4t CPUs, so I can't start two at once. Probably that's why I don't have any issues :)



Once those issues have been solved I will gladly set a 16/32t Threadripper on the QC tasks to shovel them away ;)

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

And to minimize the problem, you can run with the default 4 cores per work unit and only 4 cores (or less) per machine on average, so that they usually don't start more than one work unit at a time anyway. In that way, it is a manageable problem for me, though it would be best if they fix it. I am sure more people would then be willing to run it.



I have set up an i3 Sandy Bridge machine (2c/4t) on Ubuntu and use the default config as suggested above, but all tasks report a calculation error after just 1-3 minutes. Any idea what goes wrong here?



http://www.gpugrid.net/result.php?resultid=17336453

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

JoergF



Try installing gcc and see if that helps.



In a terminal session sudo apt-get install gcc

JoergF



Try installing gcc and see if that helps.



In a terminal session sudo apt-get install gcc



Thank you... I have installed it. However, I am not able to test it, as there seems to be a daily quota of 2 tasks per computer and I have to wait for the next day. Really, in view of >10,000 unsent QC tasks, that limitation is somewhat surprising.

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

JoergF said:

However I am not able to test it as there seems to be a daily quota of 2 tasks per computer and I have to wait for the next day



There is a daily limit if a cruncher starts turning in more than the usual number of errors. Once you start turning in valid tasks, the daily limit will go back up. I have been caught by it a few times when running test tasks.



Hopefully you will get better results tomorrow.

Thank you.. seems to work now :)

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

[VENETO] boboviz
Joined: 10 Sep 10 | Posts: 142 | Credit: 388,132 | RAC: 0
Message 49211 - Posted: 26 Mar 2018 | 19:13:52 UTC - in response to Message 49179.

People should expect an infinite number of workunits...

We will run this forever.



Oh, well.

So we dream of an OpenCL client (or an SSE/AVX CPU optimization).

Oh, well.

So we dream of an OpenCL client (or an SSE/AVX CPU optimization).

As you probably know, the FMA version of TN-Grid is faster than the AVX version. At least that was my result comparing a Ryzen 1700 (FMA) to an i7-4770 (AVX), both on Ubuntu.



[VENETO] boboviz
Joined: 10 Sep 10 | Posts: 142 | Credit: 388,132 | RAC: 0
Message 49213 - Posted: 27 Mar 2018 | 8:20:51 UTC - in response to Message 49212.

As you probably know, the FMA version of TN-Grid is faster than the AVX version. At least that was my result comparing a Ryzen 1700 (FMA) to an i7-4770 (AVX), both on Ubuntu.





I know.

And, I forgot, a Windows app!!! :-)



The Linux QM app runs fine, but it requires some packages to be installed. People should expect an infinite number of workunits...



We will run this forever.

I still think the main obstacle to gaining a wider Linux contributor base is the start-up error when two WUs start at the same time. In my view, this hinders deployment of the app on newer computers with more than 4 threads/cores, as the other available threads/cores are not loaded by other BOINC projects.




I still think the main obstacle to gaining a wider Linux contributor base is the start-up error when two WUs start at the same time. In my view, this hinders deployment of the app on newer computers with more than 4 threads/cores, as the other available threads/cores are not loaded by other BOINC projects.





True. I'm using only 2c/4t and 4c CPUs because of that. Ryzen 8c/16t is crunching WCG.

You can sort of learn to live with its idiosyncrasies after a while; they are more an annoyance at first. But I wonder whether a non-multi-core version would fix the start-up problems? It would be worth looking into.

May I touch upon this matter again after 2.5-3 weeks... at the risk of being a pain in the neck... is there any progress regarding concurrent QC tasks? I would really like to use an 8-16 core CPU (instead of my i3) and run several jobs at the same time. I hope the admins can take some time to follow up, in view of the many jobs we still have to crunch.



Thanks.

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Hey,



What PC specs do you recommend for these Linux work units? (minimum CPU and RAM?) Thanks

____________

Cruncher/Learner in progress.

I have five QC running at the moment on two machines, and they are taking from 230 MB to 260 MB. I would plan on at least 300 MB to be safe.

I now have one running at 1016 MB for a few minutes, but now down to 932 MB. It looks like the upper limit is a bit elastic.

I now have one running at 1016 MB for a few minutes, but now down to 932 MB. It looks like the upper limit is a bit elastic.

Just to confirm, that's 1GB per work unit?

I have five QC running at the moment on two machines, and they are taking from 230 MB to 260 MB. I would plan on at least 300 MB to be safe.



Pardon me for jumping in; does that mean it is now possible to run several QC tasks at the same time? For example, 4 jobs on the Ryzen 1700?

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Toni
Volunteer moderator | Project administrator | Project developer | Project scientist
Joined: 9 Dec 08 | Posts: 958 | Credit: 4,353,973 | RAC: 0
Message 49311 - Posted: 18 Apr 2018 | 11:32:10 UTC - in response to Message 49310.
Last modified: 18 Apr 2018 | 11:33:45 UTC

It is possible to run as many as you want concurrently. A bug unfortunately prevents simultaneous starts. I am investigating possible workarounds, but no timeline yet, sorry. Requirements are in some other thread; they are rather mild (the only thing you need is the gcc package installed).

I have five QC running at the moment on two machines, and they are taking from 230 MB to 260 MB. I would plan on at least 300 MB to be safe.



pardon me for jumping in, does that mean it is possible now to run several QC tasks at the same time? For example, 4 Jobs on the Ryzen 1700?



Best to just run 1 per computer via app_config, or you'll eventually get two starting at once. They'll both crash, and you'll end up in this loop.

I now have one running at 1016 MB for a few minutes, but now down to 932 MB. It looks like the upper limit is a bit elastic.

Just to confirm, that's 1GB per work unit?

Yes, but now I see 1466 MB for a single work unit, the most I have seen.



Note however that I use an app_config to limit them to only one CPU core per work unit, but I do not limit the number of work units that can run at a time. It would undoubtedly be a more efficient use of memory to allow the default value of four cores to run on a single work unit. It probably would not use much more memory than a single core, but what the maximum is I don't really know. I have 32 GB, so I have never really paid much attention to it. I use BOINCTasks to measure the memory use, by the way.

It is possible to run as many as you want concurrently. A bug unfortunately prevents simultaneous starts.



Okay, thank you :)

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

Requirements are in some other thread; they are rather mild (the only thing you need is the gcc package installed).



I have looked around and cannot find any requirements for this specific project. The only CPU requirements I see are here

http://www.gpugrid.net/join.php



But that looks like general use to me.



I will make my question a little simpler... would a Core i3 work fine with this project, and if so, would a gen 1 be fine, gen 2, or newer? My budget for another PC dedicated to CPU work is about 100 USD. I don't mind a small desktop without a big GPU slot, either. Thank you.

____________

Cruncher/Learner in progress.



I will make my question a little simpler... would a Core i3 work fine with this project, and if so, would a gen 1 be fine, gen 2, or newer? My budget for another PC dedicated to CPU work is about 100 USD. I don't mind a small desktop without a big GPU slot, either. Thank you.



For questions like this, wuprop is a fantastic resource. Check the link and you can see things like time to complete for various CPUs and other requirements like memory. In this case, it looks like you may need 776.5 MB of RAM, and an i3 should do just fine.






For questions like this, wuprop is a fantastic resource. Check the link and you can see things like time to complete for various CPUs and other requirements like memory. In this case looks like you may need 776.5 MB of RAM and an i3 should do just fine.





Thank you very much.

____________

Cruncher/Learner in progress.

It is possible to run as many as you want concurrently. A bug unfortunately prevents simultaneous starts. I am investigating possible workarounds but no timeline yet, sorry.



One simple solution would be creating a TEMP file in the application directory. The first thing a job does is try to create this file. If the create command fails (because the file is already there, or a multiple create/write conflict occurred), the job must back off for a while. If successful, the job is allowed to start, and the others will try again in a couple of seconds. The now-starting job shall delete the TEMP file in a timely manner so that the others can also get started. The TEMP file thus works like a (do-not-start) flag. You may also stop all tasks sometime and delete the TEMP file at regular intervals, to make sure no failing or cancelled task leaves the file in place forever.



Frankly there are also other and more professional things like semaphores available in the OS, but the above is possibly the fastest solution.
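The TEMP-file suggestion above boils down to an atomic create-or-fail. A minimal sketch of the idea (the lock file name and retry values here are made up for illustration, not part of any GPUGRID app):

```python
import os
import time

LOCK = "qc_startup.lock"  # hypothetical lock file in the app directory

def acquire_startup_lock(lock_path=LOCK, retry_delay=2.0, timeout=120.0):
    """Try to create the lock file atomically; back off while another task holds it."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # O_CREAT | O_EXCL makes creation atomic: exactly one process wins.
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except FileExistsError:
            if time.monotonic() >= deadline:
                return False  # possibly a stale lock; the caller may clean it up
            time.sleep(retry_delay)

def release_startup_lock(lock_path=LOCK):
    """Delete the lock promptly so the next queued task can start."""
    try:
        os.remove(lock_path)
    except FileNotFoundError:
        pass
```

A wrapper would call acquire_startup_lock() before its fragile start-up phase and release_startup_lock() as soon as that phase is done; the timeout guards against a crashed task leaving the flag in place forever, as noted above.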

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

...You may also stop all tasks



Edit: I mean to pause the tasks, not to cancel them of course. By the way, does the error show up also when multiple tasks are paused and then re-started/continued at the same time?

____________

I would love to see HCF1 protein folding and interaction simulations to help my little boy... someday.

I am having a lot of CPU Quantum Chemistry WUs error out as soon as they start on my Ubuntu machine. Some work but most error out. Is this inherent with the CPU app or is it my machine?



Below are the machine's tasks:

http://www.gpugrid.net/results.php?hostid=424454

Me too:



http://www.gpugrid.net/result.php?resultid=17510462





(SORRY, double post)

I am having a lot of CPU Quantum Chemistry WUs error out as soon as they start on my Ubuntu machine. Some work but most error out. Is this inherent with the CPU app or is it my machine?

There's a bug in the app which prevents more than 1 task starting simultaneously. Once a task has started successfully, you can start another task manually. Since there's no automated way to do this, you should pause all but one task before you shut down your computer, then start them one by one. The other option is to limit the concurrently running QC apps to 1. Since this app uses only 4 threads (cores), you should utilize your other CPU cores with a different project.

To do this you should create / modify your app_config.xml file in the projects\www.gpugrid.net folder.



<app_config>
  <app>
    <name>QC</name>
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>
    <app_name>QC</app_name>
    <plan_class>mt</plan_class>
    <avg_ncpus>4</avg_ncpus>
  </app_version>
</app_config>
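Since a missing or misplaced closing tag silently breaks this file, one quick sanity check before restarting the BOINC client is to parse it. A minimal sketch, embedding the same config as a string:

```python
import xml.etree.ElementTree as ET

# The app_config.xml content from the post above, as a string for checking.
APP_CONFIG = """<app_config>
  <app>
    <name>QC</name>
    <max_concurrent>1</max_concurrent>
  </app>
  <app_version>
    <app_name>QC</app_name>
    <plan_class>mt</plan_class>
    <avg_ncpus>4</avg_ncpus>
  </app_version>
</app_config>"""

root = ET.fromstring(APP_CONFIG)  # raises ParseError if tags are unbalanced
assert root.findtext("app/name") == "QC"
assert root.findtext("app/max_concurrent") == "1"
assert root.findtext("app_version/avg_ncpus") == "4"
```

In practice you would read the file from projects/www.gpugrid.net/ instead of a string; the point is just that a parse failure means the client will reject or ignore the config.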

So I understand why I receive "error while computing" with almost 0 seconds of compute time (multiple WUs starting at once), but I don't understand the fairly high number of failed WUs with run times of several thousand or even over ten thousand seconds. Linked below are my error rates:



http://www.gpugrid.net/results.php?hostid=424454&offset=0&show_names=0&state=5&appid=



Can anyone explain why the errors that are not caused by multiple WUs starting at once are occurring?

Did you stop the computer at any point, and let them all RE-start in unison - from whatever point they'd reached?

Did you stop the computer at any point, and let them all RE-start in unison - from whatever point they'd reached?

From my knowledge, the computer only turned off once last week. It is on 24/7 otherwise. It looks like these errors are all over the place in terms of date.

We are losing CPU Volunteers, can anyone help?

We are losing CPU Volunteers, can anyone help?





It's up to the scientists to sort out. They will need a Windows version to have any hope of getting reasonable throughput.

____________

Radio Caroline, the world's most famous offshore pirate radio station.

Great music since April 1964. Support Radio Caroline Team -

Radio Caroline

We are losing CPU Volunteers, can anyone help?





It's up to the scientists to sort out. They will need a Windows version to have any hope of getting reasonable throughput.



Or just a Linux app without the "starting many at once" issue. I could add more than 100 cores, but I can't because of that.

Stefan

Volunteer moderator

Project developer

Project scientist

Send message

Joined: 5 Mar 13

Posts: 348

Credit: 0

RAC: 0

Level



Scientific publications

Joined: 5 Mar 13Posts: 348Credit: 0RAC: 0LevelScientific publications Message 49417 - Posted: 9 May 2018 | 9:49:02 UTC

We will talk this week with the devs of the QM CPU software to see how we can make a Windows build. Once we know the status and how much work it is we will update. But believe me we are extremely keen on a Windows version...

tullio

Send message

Joined: 8 May 18

Posts: 167

Credit: 70,146,407

RAC: 220,768

Level



Scientific publications

Joined: 8 May 18Posts: 167Credit: 70,146,407RAC: 220,768LevelScientific publications Message 49419 - Posted: 9 May 2018 | 11:19:36 UTC

Since you are using the wrapper from LHC@home, why don't you use VirtualBox? This would eliminate the need for a Windows version. All Windows users of LHC@home can run LHC@home tasks, written for Scientific Linux, using VirtualBox.

Tullio

Changed my desktop settings to include CPU tasks again (I'm in the pool, so it won't show on my account). Hope my contribution, small as it is, helps!

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49427 - Posted: 10 May 2018 | 7:24:21 UTC - in response to Message 49419.

Since you are using the wrapper from LHC@home why don't you use VirtualBox?



No, please.

Now I'm using a Linux VM with VirtualBox for this project and it's a nightmare.

Since you are using the wrapper from LHC@home why don't you use VirtualBox?



No, please.

Now I'm using a Linux VM with VirtualBox for this project and it's a nightmare.

The LHC@home performance using Virtualbox vs direct from linux is a ridiculous loss in efficiency and performance, not to mention substantial ram usage. If we can avoid virtualbox outright I would consider that a win.

The LHC@home performance using Virtualbox vs direct from linux is a ridiculous loss in efficiency and performance, not to mention substantial ram usage. If we can avoid virtualbox outright I would consider that a win.

You keep saying that, which I think just shows that you have had problems with VirtualBox. Even if so, I don't know why that is a reason for GPUGrid not to use it. A lot of people can get it to work.



It is not a "ridiculous loss in efficiency", but about the same as using Windows on some projects optimized for Linux (I do both VirtualBox ATLAS and native ATLAS on LHC, and have a basis for comparison).



And the RAM usage depends mainly on the project. It is reasonable enough (about 2 GB) when running 4 cores on Cosmology. If that is too much for you, then upgrade your hardware, or stop hassling other people who do have it.



Stefan

Volunteer moderator

Project developer

Project scientist

Send message

Joined: 5 Mar 13

Posts: 348

Credit: 0

RAC: 0

Level



Scientific publications

Joined: 5 Mar 13Posts: 348Credit: 0RAC: 0LevelScientific publications Message 49481 - Posted: 15 May 2018 | 15:50:04 UTC

We talked with the devs. The idea is to collaborate over the summer on a Windows version. So I hope that by the end of summer we will have a new release.

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49483 - Posted: 15 May 2018 | 16:12:02 UTC - in response to Message 49481.

We talked with the devs. The idea is to collaborate over summer for a Windows version. So I hope by the end of summer we should have a new release.



Great news!!

In the meantime, please fix the problems with Linux.

tullio

Send message

Joined: 8 May 18

Posts: 167

Credit: 70,146,407

RAC: 220,768

Level



Scientific publications

Joined: 8 May 18Posts: 167Credit: 70,146,407RAC: 220,768LevelScientific publications Message 49493 - Posted: 17 May 2018 | 15:36:15 UTC

All CPU tasks fail on my SuSE Linux Leap 42.3. All GPU tasks complete and validate on my GTX 750 Ti giving me huge credits, more than Einstein@home and SETI@home GPU tasks.

Tullio

All CPU tasks fail on my SuSE Linux Leap 42.3. All GPU tasks complete and validate on my GTX 750 Ti giving me huge credits, more than Einstein@home and SETI@home GPU tasks.

Tullio

Hi, Credits cannot be compared from project to project. And even though GPU tasks give so much credit here at GPUGrid there is still substantial scientific weight and benefit to GPUGrid's CPU Work Units even though they don't give as much credit. Please keep this in mind.

All CPU tasks fail on my SuSE Linux Leap 42.3. All GPU tasks complete and validate on my GTX 750 Ti giving me huge credits, more than Einstein@home and SETI@home GPU tasks.

Tullio

Hi, Credits cannot be compared from project to project. And even though GPU tasks give so much credit here at GPUGrid there is still substantial scientific weight and benefit to GPUGrid's CPU Work Units even though they don't give as much credit. Please keep this in mind.

Since BOINC credits are defined as a certain number of 'cobblestones' (completed floating point operations), they should be comparable. But I agree, they're not.



That somewhat arcane point becomes more significant when statistics sites, and BOINC itself, back-calculate the floating point performance of a project from the number of credits awarded.

tullio

Send message

Joined: 8 May 18

Posts: 167

Credit: 70,146,407

RAC: 220,768

Level



Scientific publications

Joined: 8 May 18Posts: 167Credit: 70,146,407RAC: 220,768LevelScientific publications Message 49499 - Posted: 18 May 2018 | 3:58:40 UTC - in response to Message 49494.

Last modified: 18 May 2018 | 3:59:05 UTC



Hi, Credits cannot be compared from project to project. And even though GPU tasks give so much credit here at GPUGrid there is still substantial scientific weight and benefit to GPUGrid's CPU Work Units even though they don't give as much credit. Please keep this in mind.

I don't give a damn about credits. But all my CPU tasks fail miserably.

Tullio

We are losing CPU Volunteers, can anyone help?



If the admins want CPU support, they'll make an app without bugs. No effort by them, no effort from us. They get paid; we pay for the PCs, and we'd rather not have them aimlessly wasted.

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49514 - Posted: 21 May 2018 | 12:17:04 UTC - in response to Message 49504.

We are losing CPU Volunteers, can anyone help?



If admins want CPU support they'll make an app w/o bugs. No effort by them, no effort from us.



+1

I would put up with the bugs if I could. But I get only crashes on QC now, so I am out of business.



I hope they make a big announcement when it is fixed, since I won't know about it otherwise. Maybe they are spending their time on the Windows version?

tullio

Send message

Joined: 8 May 18

Posts: 167

Credit: 70,146,407

RAC: 220,768

Level



Scientific publications

Joined: 8 May 18Posts: 167Credit: 70,146,407RAC: 220,768LevelScientific publications Message 49516 - Posted: 21 May 2018 | 14:03:43 UTC

Maybe they are more interested in the results of the CRUNCHATHLON competition. You cannot compete with a dead horse.

Tullio

You can not compete with a dead horse.

Tullio

Not at the Palio either.



Please see the Number Crunching forum.

Well, I gave QM a try, but I really didn't like it, for a few reasons.



First, the setup doesn't work when running more than one WU at a time. You need a "lock file" to ensure only one task does the setup. If that task gets aborted, you might have problems, but it's still better than the current situation.
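A minimal sketch of that lock-file idea, assuming a POSIX system: take an exclusive fcntl lock around the setup phase, so a second task blocks until the first one finishes. The names (setup.lock, run_setup) are illustrative placeholders, not taken from the actual GPUGrid app.

```python
# Hypothetical sketch: serialize the one-time setup phase with an
# exclusive file lock so only one task runs setup at a time.
# LOCK_PATH and run_setup() are illustrative placeholders.
import fcntl

LOCK_PATH = "setup.lock"

def run_setup():
    # Stand-in for the real one-time setup work.
    return "setup done"

def guarded_setup():
    # Open (or create) the lock file and take an exclusive lock;
    # a second task blocks here until the first releases it.
    with open(LOCK_PATH, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)
        try:
            return run_setup()
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)

print(guarded_setup())  # prints: setup done
```

If the lock holder crashes, the kernel releases the lock automatically, which is why a flock-style lock is safer than a mere "does the file exist" check.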



Second, you know what I hate? I hate BOINC apps that download more data themselves. (Such as LHC@Home Atlas.) All necessary data should be coming from your BOINC server. At least your app only seems to do that downloading once.



Third, you know what else I hate? Bloated apps. Your "miniconda" includes such things as Tk (not needed unless there's a screen saver?) and man pages. Maybe if you streamlined it, you could fit whatever extra data it downloads into the initial download, and then you wouldn't need the networking libraries either.



But, I finally did get QM working. What made me give up on it was the credit. It started around 600 credits/WU, but it seemed to get cut in half every few sets of WUs. It was down around 50 when I gave up.

But, I finally did get QM working. What made me give up on it was the credit. It started around 600 credits/WU, but it seemed to get cut in half every few sets of WUs. It was down around 50 when I gave up.

I don't care about credits themselves, but I do wonder about the science that is being accomplished. If it is the same per work unit, that is OK, but if it is being cut in half periodically also, then that is a problem. I wonder what causes it?



But, I finally did get QM working. What made me give up on it was the credit. It started around 600 credits/WU, but it seemed to get cut in half every few sets of WUs. It was down around 50 when I gave up.

The credits are down to around 50 right now because the Work Units are extremely short due to testing.



I'm not sure what everyone's fantasy is with credit. I personally couldn't care less as long as what I am doing is benefiting science. The credit itself is worth nothing so I'm not sure I get the point. I guess some people just need something more.

Stefan

Volunteer moderator

Project developer

Project scientist

Send message

Joined: 5 Mar 13

Posts: 348

Credit: 0

RAC: 0

Level



Scientific publications

Joined: 5 Mar 13Posts: 348Credit: 0RAC: 0LevelScientific publications Message 49581 - Posted: 1 Jun 2018 | 14:11:22 UTC

Last modified: 1 Jun 2018 | 14:44:33 UTC

Indeed, as papalito said, these WUs are muuuuuch shorter than the 600-credit ones. They scale by computation time automatically. I miscalculated yesterday how fast they are, so we ran out, but I'm going to submit more in an hour or so.



The downloading stuff I understand, but unfortunately we depend on external software (psi4), so we cannot control everything. As you said, it only downloads once. And yes, conda is bloated in general, but we kept it down to the bare minimum packages.



@Jim, the WUs that I send contain varying molecules with different numbers of atoms. The ones I sent yesterday had very few atoms, so they completed super fast. But "every molecule is sacred", as per Monty Python. I guess I could mix all molecules together, but it would become organizational chaos for me to keep track of what I have already calculated, so for now I will continue submitting them in order of increasing molecule size.

Stefan, can you reduce the free disk-space requirement of 4768.37 MB to something like 4000 MB or less? Your QM WUs no longer fit on my 16 GB USB stick! (After a Lubuntu upgrade from 17.10 to 18.04.)

Stefan

Volunteer moderator

Project developer

Project scientist

Send message

Joined: 5 Mar 13

Posts: 348

Credit: 0

RAC: 0

Level



Scientific publications

Joined: 5 Mar 13Posts: 348Credit: 0RAC: 0LevelScientific publications Message 49583 - Posted: 1 Jun 2018 | 15:13:14 UTC - in response to Message 49582.

Sorry klepel, I don't think I can :( Most of it is taken up by miniconda and the required software, not the workunits themselves.

@Jim the WUs that I send contain varying molecules of different number of atoms. The ones I sent yesterday had very few atoms so they completed super fast. But "every molecule is sacred" as per Monty Python. I guess I could mix all molecules together but it would become an organizational chaos for me to keep track of what I have already calculated, so for now I will continue submitting them with increasing molecule size.

No problem at all. Keep doing what you have to do.



@Stefan can you quickly describe for us what the Quantum Chemistry Work Units are doing and what you are learning from these work units?

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49587 - Posted: 3 Jun 2018 | 9:30:46 UTC - in response to Message 49193.



When can we expect some info about scientific results created with this app?



Months away, we have just started.







Some months have gone by.

We are crunching for...? Cancer research? Any preliminary results?

Do you plan, together with the Windows app, a GPU version as well?

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49588 - Posted: 3 Jun 2018 | 9:31:53 UTC - in response to Message 49585.

@Stefan can you quickly describe for us what the Quantum Chemistry Work Units are doing and what you are learning from these work units?



+1

+ another 1. I have 7+7+3 cores 100% dedicated to QC and 4+4+4 cores 50% dedicated to QC and WCG, all 24/7. Along with 3 GTX 1060's, makes for a very warm office, especially now in summer. Would be interesting to find out what the project goals are :).

@Stefan can you quickly describe for us what the Quantum Chemistry Work Units are doing and what you are learning from these work units?

See the "New Student and QMML Project" thread for a clue.

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49594 - Posted: 4 Jun 2018 | 7:50:57 UTC - in response to Message 49590.

@Stefan can you quickly describe for us what the Quantum Chemistry Work Units are doing and what you are learning from these work units?

See the "New Student and QMML Project" thread for a clue.



Really a "clue".

"We are simulating molecules". Ok, very precise :-P



Stefan

Volunteer moderator

Project developer

Project scientist

Send message

Joined: 5 Mar 13

Posts: 348

Credit: 0

RAC: 0

Level



Scientific publications

Joined: 5 Mar 13Posts: 348Credit: 0RAC: 0LevelScientific publications Message 49595 - Posted: 4 Jun 2018 | 13:11:08 UTC

Last modified: 4 Jun 2018 | 13:23:27 UTC

Yeah sorry. I've developed a partially healthy paranoia over the last years due to some researchers being unhealthily competitive. In this case I'm trying to steer away from what others do to avoid problems but I'll be a bit vague anyway just to be safe.



Practically, we are trying to teach a neural network to calculate molecular energies and forces. QM calculations are horribly slow and scale quadratically with the number of atoms. But a network trained on QM data is orders of magnitude faster, scales linearly with the number of atoms, and achieves decently good accuracy. We believe these networks are the future for molecular simulations, so we are working with them to see what problems we can apply them to. At the moment they are still slower than the usual MD simulations we used to do, but they should be much more accurate given enough training data. This training data is what is critical and what we are trying to produce currently.



Currently there are already three or so groups working on such networks and they have shown great results so we try to mostly collaborate with them to avoid duplication of effort and clash of research topics.

If you want to read up on some great projects that inspired us, check out ANI1 https://arxiv.org/abs/1610.08935 TensorMol https://arxiv.org/abs/1711.06385 DeepMD https://arxiv.org/abs/1712.03641



On applications to more biological research and implications to disease related research you will have to wait for my publication :)

Very interesting, thank you :)

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49597 - Posted: 4 Jun 2018 | 16:05:50 UTC - in response to Message 49595.

Thank you!!

Thank you for sharing more info Stefan!

Yes, thank you, much appreciated.

Practically we are trying to teach a neural network to calculate molecular energies and forces.

As you well know, the new Nvidia cards are said to be designed for "deep learning". Maybe that will be of some use to you someday.

Stefan

Volunteer moderator

Project developer

Project scientist

Send message

Joined: 5 Mar 13

Posts: 348

Credit: 0

RAC: 0

Level



Scientific publications

Joined: 5 Mar 13Posts: 348Credit: 0RAC: 0LevelScientific publications Message 49601 - Posted: 5 Jun 2018 | 7:44:18 UTC - in response to Message 49600.

Yes, these quantum chemistry potential networks are a deep learning application :) So we train them on our local NVIDIA GPUs. But for the moment I don't see a need to distribute the training of the networks to GPUGRID if that's what you meant. It trains fast enough locally.

OK, that gives me a better idea of how we fit into the overall scheme of things. I am glad we can do that work for you.

Thank you for answering my questions Stefan.

I've been running the GPU tasks on a 3570k (along side some other projects) on Ubuntu 14.04, just 4 threads so no chance of the simultaneous start issue. All good so far.

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49605 - Posted: 5 Jun 2018 | 13:08:28 UTC - in response to Message 49601.

Last modified: 5 Jun 2018 | 13:08:41 UTC

So we train them on our local NVIDIA GPUs. But for the moment I don't see a need to distribute the training of the networks to GPUGRID if that's what you meant. It trains fast enough locally.



I cannot understand.

Are we crunching GpuGrid QM to "prepare" data for your local/internal GPU?

Stefan

Volunteer moderator

Project developer

Project scientist

Send message

Joined: 5 Mar 13

Posts: 348

Credit: 0

RAC: 0

Level



Scientific publications

Joined: 5 Mar 13Posts: 348Credit: 0RAC: 0LevelScientific publications Message 49606 - Posted: 5 Jun 2018 | 14:01:16 UTC - in response to Message 49605.

Last modified: 5 Jun 2018 | 14:01:56 UTC

Hm ok, maybe you are not familiar with machine learning. Sorry if I glossed over it. In machine learning and specifically supervised learning as in this project you "teach" a network to replicate some ground-truth calculations (in this case QM energy/force calculations).



This means that we take some molecules, position their atoms in 3D space and you guys calculate with QM the energy and forces of this conformation of this molecule.



Then, locally on my computer, I show a network only the positions of the atoms in space and ask it to predict the energy and forces that you calculated for us (with QM). This might sound pointless: why predict stuff you already know? Well, the great thing about networks is that they are very good interpolators, so if I now give it a molecule (more or less) similar to the ones it was trained on and ask it for the energy of this molecule, the network will give me an incredibly good estimate of the energy/forces in a few microseconds, while with QM I might need minutes to do the same.



Does this clarify it?
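The loop Stefan describes can be sketched in a toy form. In this sketch the "QM ground truth" is a trivial harmonic bond energy and the "network" is a one-parameter surrogate fitted by gradient descent; everything here is illustrative and none of it is the project's actual code.

```python
# Toy illustration of the supervised-learning idea: expensive "QM"
# calculations provide ground-truth energies, and a cheap surrogate
# model is fitted to reproduce them. Purely illustrative.
import random

def qm_energy(distance, k=2.0):
    # Stand-in for the expensive ground truth:
    # a harmonic bond with its minimum at distance 1.0.
    return k * (distance - 1.0) ** 2

# The volunteers' part: a training set of (conformation, energy) pairs.
random.seed(42)
data = [(d, qm_energy(d)) for d in (random.uniform(0.5, 2.0) for _ in range(100))]

# The scientists' part: fit the surrogate k_hat * (d - 1)^2 to the data.
k_hat = 0.0
lr = 0.05
for _ in range(500):
    for d, e in data:
        pred = k_hat * (d - 1.0) ** 2
        grad = 2 * (pred - e) * (d - 1.0) ** 2
        k_hat -= lr * grad

# The fitted surrogate now predicts unseen energies almost for free.
print(round(k_hat, 3))  # prints 2.0, the true force constant
```

The real project replaces the one-parameter surrogate with a deep network and the harmonic toy energy with full QM calculations, but the division of labor is the same: the slow ground-truth calculations are distributed, the fast model fitting is done locally.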

Thank you Stefan, this is a fantastic explanation!

+1

[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 49619 - Posted: 6 Jun 2018 | 13:18:37 UTC - in response to Message 49606.

Does this clarify it?



Very, very clear.

Thank you!!

For some reason, that approach appeals to me a lot as an ideal distributed computing project. It allows you to offload the slow calculations, while you retain the flexibility of asking a lot of different questions on the returned data that you can investigate with your high-speed calculations. It does not appear to require excessive bandwidth to shuttle the data back and forth (though I have a lot if needed), nor does it require a lot of memory (though I have plenty of that too if you need it).



Also, it does not appear to be tied to a particular type of molecule or disease, but should lend itself to a wide range of subjects. And lastly, aside from a few startup glitches, it seems to be reliable on home computers, and we need not constantly fight to keep it running in the face of errors or crashes. I hope, and expect, that you will achieve significant success with it.

Tomas Brada

Send message

Joined: 3 Nov 15

Posts: 38

Credit: 2,015,431

RAC: 0

Level



Scientific publications

Joined: 3 Nov 15Posts: 38Credit: 2,015,431RAC: 0LevelScientific publications Message 50855 - Posted: 11 Nov 2018 | 18:47:01 UTC

I am running QC on my box currently and there do not appear to be any startup glitches. It can run two, and it can start multiple at the same time. I see there is some usage of flock in the app, which seems to solve the startup crashes that everyone was complaining about. So, good job.

Pity that there are no more AMD GPU tasks, now that I have acquired two such GPUs.

____________



No Chems for me;)

The GPU WUs run best under my Win7 64-bit system (Xeon E3-1230 v2 + NVIDIA GTX 1050 + 8 GB RAM).

So, via VMware 14: Ubuntu 18.04 with the newest BOINC client (4 cores, 4 GB RAM, 100 GB HD space). "No Quantum Chemistry available," it says. Solutions?

Sorry, long night, no patience. Let me know if I have forgotten some info.

So, via VMware 14: Ubuntu 18.04 with the newest BOINC client (4 cores, 4 GB RAM, 100 GB HD space). "No Quantum Chemistry available," it says. Solutions?



When I started processing QC WUs, I had to configure GPUGrid preferences to use CPU and accept QC tasks.

Then I had to upgrade several of my Linux systems from 4 to 8 GB RAM.

4 GB systems didn't get QC WUs. This seems to be your problem.

And I also had to upgrade two systems with 64 GB SSDs to bigger drives because of a lack of HD space.

You're welcome. Good luck!

Mmmh, I think I can upgrade the RAM to 16 GB. CPU and QC are allowed. You think the RAM is the problem... not the virtual machine itself?

For sure, Quantum Chemistry WUs won't be sent to 4 GB RAM systems.

Also, BOINC Client must "think" VirtualBox is installed.

The BOINC Manager log should show something similar to the last line in this example, coming from one of my systems running QC WUs successfully...



Sun 03 Feb 2019 19:37:31 WET | | Starting BOINC client version 7.6.31 for x86_64-pc-linux-gnu

Sun 03 Feb 2019 19:37:31 WET | | log flags: file_xfer, sched_ops, task

Sun 03 Feb 2019 19:37:31 WET | | Libraries: libcurl/7.47.0 OpenSSL/1.0.2g zlib/1.2.8 libidn/1.32 librtmp/2.3

Sun 03 Feb 2019 19:37:31 WET | | Data directory: /var/lib/boinc-client

Sun 03 Feb 2019 19:37:31 WET | | CUDA: NVIDIA GPU 0: GeForce GTX 1050 Ti (driver version 415.27, CUDA version 10.0, compute capability 6.1, 4032MB, 3613MB available, 2138 GFLOPS peak)

Sun 03 Feb 2019 19:37:31 WET | | OpenCL: NVIDIA GPU 0: GeForce GTX 1050 Ti (driver version 415.27, device version OpenCL 1.2 CUDA, 4032MB, 3613MB available, 2138 GFLOPS peak)

Sun 03 Feb 2019 19:37:32 WET | | Host name: ServicEnginIC

Sun 03 Feb 2019 19:37:32 WET | | Processor: 4 GenuineIntel Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz [Family 6 Model 23 Stepping 10]

Sun 03 Feb 2019 19:37:32 WET | | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm pti tpr_shadow vnmi flexpriority dtherm

Sun 03 Feb 2019 19:37:32 WET | | OS: Linux: 4.15.0-43-generic

Sun 03 Feb 2019 19:37:32 WET | | Memory: 7.79 GB physical, 16.00 GB virtual

Sun 03 Feb 2019 19:37:32 WET | | Disk: 424.24 GB total, 256.27 GB free

Sun 03 Feb 2019 19:37:32 WET | | Local time is UTC +0 hours

Sun 03 Feb 2019 19:37:32 WET | | VirtualBox version: 5.1.38_Ubuntur122592

.

.

Also, BOINC Client must "think" VirtualBox is installed.

I have not found VirtualBox to be necessary. My present machine, a Ryzen 2600 (Ubuntu 18.04.1) does not have it installed.



Ryzen2600



1 2/2/2019 9:38:37 PM Starting BOINC client version 7.14.2 for x86_64-pc-linux-gnu

2 2/2/2019 9:38:37 PM log flags: file_xfer, sched_ops, task

3 2/2/2019 9:38:37 PM Libraries: libcurl/7.58.0 OpenSSL/1.1.0g zlib/1.2.11 libidn2/2.0.4 libpsl/0.19.1 (+libidn2/2.0.4) nghttp2/1.30.0 librtmp/2.3

4 2/2/2019 9:38:37 PM Data directory: /var/lib/boinc-client

5 2/2/2019 9:38:37 PM CUDA: NVIDIA GPU 0: GeForce GTX 750 Ti (driver version 390.77, CUDA version 9.1, compute capability 5.0, 2001MB, 1937MB available, 1472 GFLOPS peak)

6 2/2/2019 9:38:37 PM [libc detection] gathered: 2.27, Ubuntu GLIBC 2.27-3ubuntu1

7 2/2/2019 9:38:37 PM Host name: Ryzen2600

8 2/2/2019 9:38:37 PM Processor: 12 AuthenticAMD AMD Ryzen 5 2600 Six-Core Processor [Family 23 Model 8 Stepping 2]

9 2/2/2019 9:38:37 PM Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor

10 2/2/2019 9:38:37 PM OS: Linux Ubuntu: Ubuntu 18.04.1 LTS [4.15.0-43-generic|libc 2.27 (Ubuntu GLIBC 2.27-3ubuntu1)]

11 2/2/2019 9:38:37 PM Memory: 15.66 GB physical, 1.86 GB virtual

12 2/2/2019 9:38:37 PM Disk: 411.52 GB total, 377.77 GB free

13 2/2/2019 9:38:37 PM Local time is UTC -5 hours

14 GPUGRID 2/2/2019 9:38:37 PM Found app_config.xml

15 Rosetta@home 2/2/2019 9:38:37 PM Found app_config.xml

16 2/2/2019 9:38:37 PM Config: allow multiple clients

17 2/2/2019 9:38:37 PM Config: GUI RPC allowed from any host

18 2/2/2019 9:38:37 PM Config: GUI RPCs allowed from:

19 2/2/2019 9:38:37 PM 192.168.0.107

20 2/2/2019 9:38:37 PM Config: use all coprocessors

21 GPUGRID 2/2/2019 9:38:37 PM URL http://www.gpugrid.net/; Computer ID 497979; resource share 100



I have not found VirtualBox to be necessary. My present machine, a Ryzen 2600 (Ubuntu 18.04.1) does not have it installed.



Yes, that's right.

VirtualBox is not necessary for QC WUs to run.

I mixed requirements for another of my CPU projects, LHC@Home... Excuse me ;-)

I mixed requirements for another of my CPU projects, LHC@Home... Excuse me ;-)

I run both also; I usually just install it anyway.



Good, then I'll buy 8 GB extra. I hope the aged capacitors on my over-five-year-old mainboard don't get killed...

And now I am blind. Is there an Ubuntu 18.04.1 64-bit i386 version, or am I just not finding it?

The "Desktop image" works for me on both AMD (Ryzen) and Intel (i7-8700).

Their description is a bit confusing though.

http://releases.ubuntu.com/18.04/

Plaka

Send message

Joined: 24 Apr 15

Posts: 3

Credit: 7,892,625

RAC: 0

Level



Scientific publications

Joined: 24 Apr 15Posts: 3Credit: 7,892,625RAC: 0LevelScientific publications Message 51456 - Posted: 10 Feb 2019 | 9:40:34 UTC

Hi, what are the requirements? I have 4 GB RAM and an i5-8400 on my Linux box. Is that OK?



Which is the command for helping?



--project_attach http://www.gpugrid.net account_key ???





Thx

____________



tullio

Send message

Joined: 8 May 18

Posts: 167

Credit: 70,146,407

RAC: 220,768

Level



Scientific publications

Joined: 8 May 18Posts: 167Credit: 70,146,407RAC: 220,768LevelScientific publications Message 51458 - Posted: 10 Feb 2019 | 13:27:05 UTC

Last modified: 10 Feb 2019 | 13:29:56 UTC

I installed a VirtualBox Linux on a Windows 10 PC but the BOINC Manager supplied by SuSE does not work. So I attached it to Einstein@home by a line command using the key I know. But I don't know the GPUGRID key. Also, I installed the nVidia driver for my GTX 750 board, but the Einstein client does not recognize it.

Tullio

____________



[VENETO] boboviz

Send message

Joined: 10 Sep 10

Posts: 142

Credit: 388,132

RAC: 0

Level



Scientific publications

Joined: 10 Sep 10Posts: 142Credit: 388,132RAC: 0LevelScientific publications Message 51459 - Posted: 10 Feb 2019 | 14:53:14 UTC - in response to Message 51458.

I installed a VirtualBox Linux on a Windows 10 PC but the BOINC Manager supplied by SuSE does not work.



I'm using Linux Mint 18.1 (Serena) on my Win10 machine and it runs well with BOINC and GPUGrid.

Hi, we need more CPUs on Linux to run QM simulations. Anybody can help?

If you give me your key I can make a manual connection from this Virtual Machine as I have done with Einstein@home. The SuSE BOINC manager does not work on the Virtual Machine. You can PM me.

Tullio



Hi, we need more CPUs on Linux to run QM simulations. Anybody can help?

If you give me your key I can make a manual connection from this Virtual Machine as I have done with Einstein@home. The SuSE BOINC manager does not work on the Virtual Machine. You can PM me.

Tullio





I believe that's your weak account key listed on your GPUGrid account page.



boinccmd --project_attach project_url your_weak_account_key





Hi, we need more CPUs on Linux to run QM simulations. Anybody can help?

Just place your QM Linux app into a VirtualBox container to make it available to Windows clients, in order to acquire the largest possible pool of machines, without significant additional development effort.



Michael.

____________

President of Rechenkraft.net - Germany's first and largest distributed computing organization.

Using VirtualBox to get the most users is an oxymoron. The native Linux app already uses a lot of disk space as it is.

The native Linux app already uses a lot of disk space as is.

Disk space? Ehm, since when is disk space an issue in 2019? ;-)

RAM could indeed become an issue, though...



Just check the world-wide OS distribution, or even just within the BOINC community - then you know where the true bottle neck is hiding (not that I like it but, unfortunately, it's a fact...).



Michael.



P.S.: Anyway - I just saw that the VM approach had already been mentioned above. I should have read the thread more carefully this time, I guess... ;-)

____________

President of Rechenkraft.net - Germany's first and largest distributed computing organization.

tullio

Send message

Joined: 8 May 18

Posts: 167

Credit: 70,146,407

RAC: 220,768

Level



Scientific publications

Joined: 8 May 18Posts: 167Credit: 70,146,407RAC: 220,768LevelScientific publications Message 51490 - Posted: 14 Feb 2019 | 16:06:23 UTC

Last modified: 14 Feb 2019 | 16:07:18 UTC

On my account page there is no key, either weak or strong. I have both Keys for all my projects, except GPUGRID. I have installed SuSE Linux Leap 15.0 on a Windows 10 PC with Home edition 1809. Then I accepted an update from SuSE and found myself with the Tumbleweed version of SuSE Linux, which is a development version. I connected it manually to Einstein@home, since its BOINC manager does not work.



Tullio

____________



On my account page there is no key, either weak or strong.

Tullio



On the top left quadrant of my GPUGrid account page is where I see my weak account key (bold). If you don't have one then I'm not sure what the problem is with your account.



Account information

Name

Email address

Country United States

Postal code

GPUGRID member since 26 Aug 2008

Change email address | password | other account info

User ID

Used in community function

Weak account key

Provides limited access to your account

The native Linux app already uses a lot of disk space as is.

Disk space? Ehm, since when is disk space an issue in 2019? ;-)

RAM could indeed become an issue, though...



Just check the world-wide OS distribution, or even just within the BOINC community - then you know where the true bottleneck is hiding (not that I like it but, unfortunately, it's a fact...).



Michael.



P.S.: Anyway - I just saw that the VM approach had already been mentioned above. I should have read the thread more carefully this time, I guess... ;-)



30-60GB per task! Times 8 on a 32t BOINC only machine surely adds up. I don't have disk space for that. Multiple people here went out and purchased larger hard drives just for this app. I try to put SSDs in machines and not something like a 1TB for a BOINC only PC.
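A quick back-of-the-envelope sketch of that worry, using the figures quoted in this thread (32 threads, 4 threads per task, 30-60 GB each):

```shell
# Worst-case disk estimate: concurrent tasks = threads / threads-per-task,
# each task needing 30-60 GB (figures quoted in this thread).
threads=32
per_task=4
tasks=$((threads / per_task))
awk -v n="$tasks" 'BEGIN { printf "%d tasks: %d-%d GB total\n", n, n*30, n*60 }'
```

This prints "8 tasks: 240-480 GB total" - well beyond a typical BOINC-only SSD.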

30-60GB per task! Times 8 on a 32t BOINC only machine surely adds up. I don't have disk space for that. Multiple people here went out and purchased larger hard drives just for this app. I try to put SSDs in machines and not something like a 1TB for a BOINC only PC.

Well, I guess I need to check that on my QM machine: Of course, 30-60 GB per task would indeed be a lot.



Still, I do not understand what exactly is supposed to be an "oxymoron" about allowing Windows users to ADD to the existing Linux participant pool by simply employing a Virtualbox-based approach where the existing Linux client is wrapped in a virtual Linux machine. For you, nothing would change.



Michael.

____________

President of Rechenkraft.net - Germany's first and largest distributed computing organization.

Using vbox is not a way to reach the majority of users. Projects with vbox-only apps have a much lower participation rate than other BOINC projects.

I am running Linux Mint in an Oracle VirtualBox.



I have 108 GB of storage in the VB, but I keep getting the following message in Boinc:



Sat Feb 16 20:10:08 2019 | GPUGRID | Message from server: Quantum Chemistry needs 54846.23MB more disk space. You currently have 2374.23 MB available and it needs 57220.46 MB.

Is there any way to fix this?



Thanks!
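For what it's worth, the numbers in that server message are self-consistent - the shortfall it reports is simply the requirement minus what the client sees as available:

```shell
# Reproduce the arithmetic in the server message above (values in MB):
# shortfall = required - available.
required=57220.46
available=2374.23
awk -v r="$required" -v a="$available" 'BEGIN { printf "%.2f MB more needed\n", r - a }'
```

This prints "54846.23 MB more needed", matching the message, so the problem is only that the client sees 2.3 GB usable, not the 108 GB the VM has.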

BOINC Manager: Options / Computing preferences / Disk and memory tab.



Disk space check boxes:



"leave at least" 5 GB free



"use no more than " 95%



It looks like your VM is set up with only 1 processor core. Ideally you'd want 4 cores for the VM since this is a multi-thread app (4 cores per task).
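Raising the core count can be done from the host side - a hedged sketch, with "MintVM" as a placeholder VM name, the VM powered off, and a guard so it degrades gracefully where VirtualBox is absent:

```shell
# Give an existing, powered-off VirtualBox VM 4 vCPUs to match the
# 4-threads-per-task QM app. "MintVM" is a placeholder VM name.
VM="MintVM"
if command -v VBoxManage >/dev/null 2>&1; then
    VBoxManage modifyvm "$VM" --cpus 4
else
    echo "VBoxManage not found; would run: VBoxManage modifyvm $VM --cpus 4"
fi
```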



Has anyone got this working with a linux VM on windows?

Using vbox is the opposite of being able to reach the majority of users. Projects with vbox only apps have a much lower participation rate compared to other BOINC projects.

Ehm, once again: the Windows users running the Virtualbox app would simply ADD to the existing Linux user pool. To my knowledge there is still no functional Windows CPU app.



Michael.

____________

President of Rechenkraft.net - Germany's first and largest distributed computing organization.

...To my knowledge there is still no functional Windows CPU app.

unfortunately, you are perfectly right :-(

biodoc asked:



Has anyone got this working with a linux VM on windows?

Yes, I have two machines set up that I use occasionally, 489909 and 490622. Host O/S is Windows 10 Home, guest O/S is Ubuntu 18.10.

Yes, I have two machines set up that I use occasionally, 489909 and 490622. Host O/S is Windows 10 Home, guest O/S is Ubuntu 18.10.



Thanks captainjack. It's good to know that's a working option for windows users until the native app is released.

Yes, I have two machines set up that I use occasionally, 489909 and 490622. Host O/S is Windows 10 Home, guest O/S is Ubuntu 18.10.



Thanks captainjack. It's good to know that's a working option for windows users until the native app is released.

Using Ubuntu worked! Unlike the Mint VM, it did not limit the amount of disk space shown in BOINC when the VM had much more available.

Thanks!

tullio

Message 51533 - Posted: 20 Feb 2019 | 19:31:18 UTC - in response to Message 51491.

Last modified: 20 Feb 2019 | 19:32:31 UTC



On the top left quadrant of my GPUGrid account page is where I see my weak account key (bold). If you don't have one then I'm not sure what the problem is with your account.



Account information

Name

Email address

Country United States

Postal code

GPUGRID member since 26 Aug 2008

Change email address | password | other account info

User ID

Used in community function

Weak account key

Provides limited access to your account



I have only

User ID

GPUGRID member since

Country

Total credit

Recent average credit

Computers



I solved the problem by connecting with a slower computer where BOINC Manager works.

Tullio

____________



30-60GB per task! Times 8 on a 32t BOINC only machine surely adds up. I don't have disk space for that. Multiple people here went out and purchased larger hard drives just for this app. I try to put SSDs in machines and not something like a 1TB for a BOINC only PC.

Well, as announced before, I have checked this and I beg to differ: on my i7 octacore machine (hyperthreading active), GPUGRID occupies no more than 4.92 GB of disk space in total - and it continuously works with 6 cores on 2 MT GPUGRID QM work units.



Michael.

____________

President of Rechenkraft.net - Germany's first and largest distributed computing organization.

30-60GB per task! Times 8 on a 32t BOINC only machine surely adds up. I don't have disk space for that. Multiple people here went out and purchased larger hard drives just for this app. I try to put SSDs in machines and not something like a 1TB for a BOINC only PC.

Well, as announced before, I have checked this and I beg to differ: on my i7 octacore machine (hyperthreading active), GPUGRID occupies no more than 4.92 GB of disk space in total - and it continuously works with 6 cores on 2 MT GPUGRID QM work units.



Michael.



A quote from a prior post in this thread.



Sat Feb 16 20:10:08 2019 | GPUGRID | Message from server: Quantum Chemistry needs 54846.23MB more disk space. You currently have 2374.23 MB available and it needs 57220.46 MB.



The admin has said production tasks would need ~60 GB. The 30 GB figure was for beta tasks. I don't run these, but others have reported needing large amounts of disk for them.

These WUs are all multi-threaded, yes? Do they require a certain number of threads >1?
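They are multi-threaded (4 threads per task, per earlier posts in this thread). The thread count reserved per task can usually be adjusted with BOINC's standard app_config.xml mechanism - a sketch only, where the app name is a placeholder that should be checked against the names in client_state.xml:

```xml
<app_config>
  <app_version>
    <!-- app name is a placeholder; read the real one from client_state.xml -->
    <app_name>QC</app_name>
    <avg_ncpus>4</avg_ncpus>
  </app_version>
</app_config>
```

The file goes in the project directory (e.g. projects/www.gpugrid.net/) and takes effect after "Read config files" in BOINC Manager.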

They are no longer doing the "monster" ones, either 30 GB or 60 GB.

http://www.gpugrid.net/forum_thread.php?id=4785&nowrap=true#51272



I am seeing a project folder size of only 2.1 GB, even when running 10 work units on my i7-8700, and the slots folder, where I think they actually run, adds another 3.4 GB.