</code>
  
''%%2**4%%'' describes the number of input sets. We keep it very low for now (16) because we work on a simulated cluster and don't want to cause trouble.
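As a side note, SALib's Saltelli sampler expands this base sample: with second-order indices enabled (SALib's default) it generates ''N * (2D + 2)'' parameter sets, so the base sample controls, but does not equal, the number of model runs. A quick standalone check of that count (the formula comes from SALib's documented sampling scheme, not from this tutorial):

```python
# Saltelli sampling produces N * (2D + 2) parameter sets when
# second-order indices are computed (SALib's default)
N = 2 ** 4   # base sample size passed to saltelli.sample
D = 2        # number of variables: lenV and angle
rows = N * (2 * D + 2)
print(rows)  # 96
```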
  
  
==== Linking processes/"workers" to API servers ====
  
  
First we create a list with links to all the API servers:
<code python>
# collect the GroLink API URL of every pod with the label app=grolink
links = []
selector = {'app': 'grolink'}
for podS in kr8s.get("pods", namespace="grolinktutorial", label_selector=selector):
    links.append(GroPy.GroLink("http://" + podS.status.podIP + ":58081/api/"))
</code>
  
Then we use this list to build a queue that is long enough to "feed" all "workers" with links.
<code python>
WORKERCOUNT = 9
pods = multiprocessing.Queue()
n = len(links)
for i in range(0, WORKERCOUNT):
    pods.put(links[i % n])   # assign the links round-robin
</code>
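To see what this round-robin assignment does without a cluster, here is a standard-library sketch with made-up link strings (the IPs are placeholders, not real pods):

```python
import queue

# placeholder link strings standing in for the GroLink objects
links = ["http://10.0.0.%d:58081/api/" % i for i in range(3)]
WORKERCOUNT = 9
pods = queue.Queue()
n = len(links)
for i in range(0, WORKERCOUNT):
    pods.put(links[i % n])

# count how often each link ended up in the queue
counts = {}
while not pods.empty():
    link = pods.get()
    counts[link] = counts.get(link, 0) + 1
print(counts)  # each of the 3 links appears 3 times
```

With 9 workers and 3 API servers, each server is therefore shared by exactly 3 workers.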
  
This queue is required so that the workers can be initialized in parallel using the following function:

<code python>
# initialize each worker: take one link from the queue, open the model
# workbench on that API server and store the resulting cursor
def init_worker(function, queue):
    function.cursor = queue.get().openWB(content=open("model.gsz", 'rb').read()).run().read()
</code>

''function.cursor'' is then set once per worker as the queue is drained, so every worker process ends up with its own workbench.


==== The actual function ====

The actual growth function is not much different from the one we used above to test our model for the first time, except that we use the variable ''grow.cursor'' as the workbench, since we already know it will be initialized that way. We also receive the input as a single tuple from the SALib sample, so we unpack it in the first line:

<code python>
# the actual execution
def grow(val):
    lenV, angle = val
    results = []
    # overwrite the parameters in the model file
    grow.cursor.updateFile("param/parameters.rgg", bytes("""
            static float lenV=""" + str(lenV) + """;
            static float angle=""" + str(angle) + """;
            """, 'utf-8')).run()
    grow.cursor.compile().run()
    for x in range(0, 10):  # execute 10 growth steps
        data = grow.cursor.runRGGFunction("run").run().read()
        results.append(float(data['console'][0]))
    return results
</code>
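Stripped of the GroLink call, the byte string that ''grow'' writes into ''param/parameters.rgg'' is just a small RGG source fragment with the two values substituted in:

```python
# build the parameter file content the same way grow() does
lenV, angle = 0.5, 45.0
content = bytes("""
        static float lenV=""" + str(lenV) + """;
        static float angle=""" + str(angle) + """;
        """, 'utf-8')
print(content.decode('utf-8'))  # contains "static float lenV=0.5;" etc.
```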

==== Running and saving ====

In the final step we initialize a multiprocessing pool with the ''init_worker'' initializer, passing the ''grow'' function and the ''pods'' queue as arguments, and map the pool over the generated input values.

Finally we can transform and save the result as a CSV file.
<code python>
# multiprocessing: each worker is initialized with its own workbench
pool = multiprocessing.Pool(processes=WORKERCOUNT, initializer=init_worker, initargs=(grow, pods,))
results = pool.map(grow, param_values)
pool.close()
y = np.array(results)

# save result
np.savetxt("result.csv", y, delimiter=",")
</code>
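Because ''grow'' returns ten values per parameter set, the mapped results form a 2-D array: one row per parameter set, one column per growth step. A standalone sketch with made-up numbers (no cluster needed):

```python
import io
import numpy as np

# hypothetical results: 4 parameter sets, 10 growth steps each
results = [[float(i * 10 + j) for j in range(10)] for i in range(4)]
y = np.array(results)
print(y.shape)  # (4, 10)

# np.savetxt also accepts a file-like object; it writes one CSV line per row
buf = io.StringIO()
np.savetxt(buf, y, delimiter=",")
print(len(buf.getvalue().splitlines()))  # 4
```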


==== Running it ====

After we put this all together and run it as we did above, we can read our CSV file through our terminal pod:
<code bash>
kubectl -n grolinktutorial exec terminal-XXXX -- cat result.csv
</code>

For simplicity, here is the complete Python code in one file:
<code python>
import numpy as np
from SALib.sample import saltelli
from GroPy import GroPy
import multiprocessing
import kr8s
from kr8s.objects import Pod

WORKERCOUNT = 9

# defining the problem
problem = {
    'num_vars': 2,
    'names': ['lenV', 'angle'],
    'bounds': [[0.1, 1], [30, 70]]
}
param_values = saltelli.sample(problem, 2**2)  # create parameter set

# create a link for each pod
links = []
selector = {'app': 'grolink'}
for podS in kr8s.get("pods", namespace="grolinktutorial", label_selector=selector):
    links.append(GroPy.GroLink("http://" + podS.status.podIP + ":58081/api/"))

# create a queue to assign pods to workers
pods = multiprocessing.Queue()
n = len(links)
for i in range(0, WORKERCOUNT):
    pods.put(links[i % n])

# initialize each worker
def init_worker(function, queue):
    function.cursor = queue.get().openWB(content=open("model.gsz", 'rb').read()).run().read()

# the actual execution
def grow(val):
    lenV, angle = val
    results = []
    # overwrite the parameters in the model file
    grow.cursor.updateFile("param/parameters.rgg", bytes("""
            static float lenV=""" + str(lenV) + """;
            static float angle=""" + str(angle) + """;
            """, 'utf-8')).run()
    grow.cursor.compile().run()
    for x in range(0, 10):  # execute 10 growth steps
        data = grow.cursor.runRGGFunction("run").run().read()
        results.append(float(data['console'][0]))
    return results

# multiprocessing
pool = multiprocessing.Pool(processes=WORKERCOUNT, initializer=init_worker, initargs=(grow, pods,))
results = pool.map(grow, param_values)
pool.close()
y = np.array(results)

# save result
np.savetxt("result.csv", y, delimiter=",")
</code>
tutorials/grolink-on-kubernetes.1733149098.txt.gz · Last modified: 2024/12/02 15:18 by tim