The makeCluster(detectCores(), outfile = "a.out") statement makes a cluster using all available cores, and console output from the workers is redirected to the file a.out.
The statement registerDoParallel(cl) registers the cluster as the foreach parallel backend.
Note that in the foreach statement we pass .packages = c('stringr', 'flock') and .export = ls(globalenv()). The former loads the specified packages inside the foreach loop, and the latter exports all variables declared in the global environment to the loop. Without these, the code inside the foreach loop cannot see the outside libraries or variables.
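Putting these pieces together, here is a minimal sketch of the setup; the loop body and the task count are placeholders:

library(doParallel)
library(foreach)

# Create a cluster on all cores; worker console output goes to a.out
cl <- makeCluster(detectCores(), outfile = "a.out")
registerDoParallel(cl)

results <- foreach(i = 1:100,
                   .packages = c('stringr', 'flock'),
                   .export = ls(globalenv())) %dopar% {
  # placeholder body: the worker can now use stringr, flock,
  # and any variable from the global environment
  stringr::str_c("task ", i)
}

stopCluster(cl)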
To avoid data races when multiple processes/threads write to the same file, we use the flock library as a mutex and wrap the write operation between flock::lock and flock::unlock.
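Inside the loop body, the guarded write looks roughly like this (the file names and the result_line variable are placeholders):

# Serialize writes to the shared output file
file_lock <- flock::lock("output.csv.lock")
write(result_line, file = "output.csv", append = TRUE)
flock::unlock(file_lock)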
Using a mutex can make the processing really slow. The other way to do this is to have each process write to its own separate file; you can use the process id in the file name. For example, see the sketch below.
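A per-process version of the write (again, result_line is a placeholder):

# Each worker writes to its own file, named after its process id
out_file <- paste0("output_", Sys.getpid(), ".txt")
write(result_line, file = out_file, append = TRUE)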
One thing to notice is that, if your parallel processing includes database connections, the above code will fail, since connection objects cannot be exported to the worker processes. You can instead initialize a connection on each worker when building the cluster, using clusterEvalQ, as sketched below.
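Here is a minimal sketch, assuming an ODBC connection; the driver, server, and database names are placeholders:

library(doParallel)

cl <- makeCluster(detectCores(), outfile = "a.out")

# Open a separate database connection on every worker
clusterEvalQ(cl, {
  library(DBI)
  conn <- DBI::dbConnect(odbc::odbc(),
                         Driver   = "SQL Server",
                         Server   = "myserver",
                         Database = "mydb")
  NULL  # do not return the connection object to the master
})

registerDoParallel(cl)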
Open Visual Studio 2017 and select Tools -> Extensions and Updates. Click Online in the left pane and search for "Microsoft Reporting Services Projects", then click Install. You need to close Visual Studio for the installation to begin. When it is done, go to Tools -> Extensions and Updates again, click Installed in the left pane, search for "Microsoft Reporting Services Projects", and click Enable.
Right-click the incompatible project and click Reload. This should solve the problem.
#include <iostream>
#include <string>
#include <typeinfo>
using namespace std;
int main() {
    string s = "a";
    int i = 0;
    cout << i << " " << typeid(i).name() << endl;
    cout << s.length() << " " << typeid(s.length()).name() << endl;
    cout << i - s.length() << " " << typeid(i - s.length()).name() << endl;
    return 0;
}
The third line of output is a very large number (18446744073709551615) instead of -1 as intended (see below).
0 i
1 m
18446744073709551615 m
This happens because the type of s.length() is size_t, an unsigned type (unsigned int or unsigned long depending on the platform; here typeid prints m, GCC's name for unsigned long). In the expression i - s.length(), the int operand is converted to the unsigned type, so 0 - 1 wraps around to the largest unsigned long value instead of -1.
To avoid this kind of problem, use something like int n = s.length(); and then use this variable to do the calculations.
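For example, storing the length in a signed variable first gives the expected result:

#include <iostream>
#include <string>
using namespace std;
int main() {
    string s = "a";
    int i = 0;
    int n = s.length();      // store the unsigned length in a signed int
    cout << i - n << endl;   // prints -1 as intended
    return 0;
}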
I did a test on an online compiler with the code below.
#include <iostream>
using namespace std;

int main()
{
    int *a = nullptr;
    delete a;  // deleting a null pointer is a no-op
    return 0;
}
The program compiles and runs with no problem, so it is safe to delete a pointer whose value is nullptr; the standard guarantees that delete on a null pointer does nothing. However, you cannot write delete nullptr; directly, since nullptr is a null pointer literal of type std::nullptr_t, not a pointer type.
Recently, I tried out AWS ParallelCluster, a Linux-based HPC cluster solution, using Slurm as the scheduler and OpenMPI. When submitting jobs across multiple compute nodes, it produced various error messages; below is one version of them.
[ip-10-0-19-27][[16152,1],0][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[16152,1],1]
[ip-10-0-19-27][[16152,1],1][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[16152,1],0]
[ip-10-0-19-27][[16152,1],2][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[16152,1],3]
[ip-10-0-19-27][[16152,1],3][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[16152,1],2]
[ip-10-0-20-194][[16152,1],4][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[16152,1],5]
[ip-10-0-20-194][[16152,1],5][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[16152,1],4]
[ip-10-0-20-194][[16152,1],6][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[16152,1],7]
[ip-10-0-20-194][[16152,1],7][btl_tcp_endpoint.c:626:mca_btl_tcp_endpoint_recv_connect_ack] received unexpected process identifier [[16152,1],6]
It turns out that OpenMPI somehow did not find the right network interface. Adding the --mca btl_tcp_if_include ens3 command-line parameter to mpirun solves the problem. Here ens3 is the default network interface; you can find yours using ifconfig.
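For example, the invocation might look like this (the executable name and process count are placeholders):

mpirun --mca btl_tcp_if_include ens3 -np 8 ./my_mpi_app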