nanovms / ops-examples
A repository of basic and advanced examples using Ops
Showcase, with examples, how to create a package from different Docker images, such as one with Node, one with Python, one with R, and so on. Also showcase commands for executing scripts in their respective languages using these NanoVMs.
I want to run a Flask application. I was following the doc specified here, but I am seeing the exception below:
~/r/p/u/p/s ❯❯❯ ops pkg load python_3.8.6 -c config.json
booting /Users/rams/.ops/images/python3 ...
en1: assigned 10.0.2.15
Traceback (most recent call last):
File "/Applications/anaconda3/envs/simpleflaskenv/bin/flask", line 5, in <module>
from flask.cli import main
ModuleNotFoundError: No module named 'flask'
exit status 3
I have flask.cli in the given path. When I try to import it in a Python REPL, it works fine.
Below is the config.json:
{
"Env": { "FLASK_APP": "hi.py" },
"MapDirs": {"/Users/rams/.local/*": "/Users/rams/.local" },
"Args": ["/Applications/anaconda3/envs/simpleflaskenv/bin/flask", "run", "--port=8080", "--host=0.0.0.0"],
"Files": ["hi.py"]
}
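For context, the config above sets `FLASK_APP` to `hi.py` but the file itself is not shown in the issue. A minimal sketch of what it might contain (hypothetical; the original `hi.py` could differ):

```python
# hi.py -- minimal Flask app matching the config's FLASK_APP setting.
# Hypothetical reconstruction; the issue does not show the real file.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Served on 0.0.0.0:8080 per the Args in config.json
    return "Hello from a Nanos unikernel!"
```

Note that the `ModuleNotFoundError` above happens inside the guest, so the relevant question is whether the conda env's `site-packages` directory is actually visible at the path the interpreter searches inside the image, not whether the import works on the host.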
Might be worth exploring the Erlang POC more to provide a more natural, ready-to-use pkg.
Thought we had a Zig example:
eyberg@box:~/zt$ ops run zig-out/bin/zt
booting /home/eyberg/.ops/images/zt.img ...
en1: assigned 10.0.2.15
info: All your codebase are belong to us.
eyberg@box:~/zt$ cat src/main.zig
const std = @import("std");
pub fn main() anyerror!void {
std.log.info("All your codebase are belong to us.", .{});
}
test "basic test" {
try std.testing.expectEqual(10, 3 + 7);
}
I believe this is an issue with system sockets or possibly a missing library.
On this service:
https://github.com/GoogleCloudPlatform/microservices-demo/tree/master/src/emailservice
I followed the instructions found here:
https://github.com/nanovms/ops-examples/tree/master/python/python3.8
logger.py needs to be patched as follows, because the Python app was built for 3.7:
@@ -33,7 +33,7 @@
def getJSONLogger(name):
logger = logging.getLogger(name)
handler = logging.StreamHandler(sys.stdout)
- formatter = CustomJsonFormatter('(timestamp) (severity) (name) (message)')
+ formatter = CustomJsonFormatter('%(timestamp)s %(level)s %(name)s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
It will work locally, but when loaded in ops with the following:
ops pkg load python_3.8.6 -c config.json -p 5055
{
"Env": {
"PORT": "5055",
"DISABLE_PROFILER": "1"
},
"Files": [
"email_server.py",
"email_client.py",
"logger.py",
"demo_pb2.py",
"demo_pb2_grpc.py"
],
"MapDirs": {
"./.venv/*": "/.local",
"./usr/lib64/*": "/usr/lib/x86_64-linux-gnu"
},
"Dirs": [
"templates"
],
"Args": [
"email_server.py"
]
}
tree usr
usr
└── lib64
├── librt.so.1
└── libstdc++.so.6
1 directory, 2 files
I receive the following error:
booting /home/jason/.ops/images/python3 ...
en1: assigned 10.0.2.15
/.local/lib/python3.8/site-packages/googlecloudprofiler/client.py:167: SyntaxWarning: "is" with a literal. Did you mean "=="?
if len(self._profilers) is 0:
{"timestamp": 1642079157.4119132, "level": null, "name": "emailservice-server", "message": "starting the email service in dummy mode.", "severity": "INFO"}
{"timestamp": 1642079157.4135542, "level": null, "name": "emailservice-server", "message": "Profiler disabled.", "severity": "INFO"}
{"timestamp": 1642079157.4146888, "level": null, "name": "emailservice-server", "message": "Tracing enabled.", "severity": "INFO"}
en1: assigned FE80::D4BF:8FFF:FE65:9B4F
{"timestamp": 1642079166.4235234, "level": null, "name": "emailservice-server", "message": "Tracing disabled.", "severity": "INFO"}
E0113 13:06:06.438411677 1 cpu_linux.cc:50] Cannot determine number of CPUs: assuming 1
epoll_ctl error: add: EPOLLEXCLUSIVE not supported
epoll_ctl error: add: EPOLLEXCLUSIVE not supported
{"timestamp": 1642079166.4669502, "level": null, "name": "emailservice-server", "message": "listening on port: 5055", "severity": "INFO"}
getsockopt error: getsockopt unimplemented optname: fd 5, level 1, optname 15
E0113 13:06:06.471703924 1 socket_utils_common_posix.cc:222] check for SO_REUSEPORT: {"created":"@1642079166.471676617","description":"No message of desired type","errno":42,"file":"src/core/lib/iomgr/socket_utils_common_posix.cc","file_line":201,"os_error":"No message of desired type","syscall":"getsockopt(SO_REUSEPORT)"}
E0113 13:06:06.476451652 1 server_chttp2.cc:40] {"created":"@1642079166.476384756","description":"No address added out of total 1 resolved","file":"src/core/ext/transport/chttp2/server/chttp2_server.cc","file_line":306,"referenced_errors":[{"created":"@1642079166.476381574","description":"Failed to add any wildcard listeners","file":"src/core/lib/iomgr/tcp_server_posix.cc","file_line":340,"referenced_errors":[{"created":"@1642079166.476370547","description":"Unable to configure socket","fd":5,"file":"src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":214,"referenced_errors":[{"created":"@1642079166.476366753","description":"Operation not supported","errno":95,"file":"src/core/lib/iomgr/socket_utils_common_posix.cc","file_line":242,"os_error":"Operation not supported","syscall":"getsockopt(TCP_NODELAY)"}]},{"created":"@1642079166.476380530","description":"Unable to configure socket","fd":5,"file":"src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":214,"referenced_errors":[{"created":"@1642079166.476377900","description":"Operation not supported","errno":95,"file":"src/core/lib/iomgr/socket_utils_common_posix.cc","file_line":242,"os_error":"Operation not supported","syscall":"getsockopt(TCP_NODELAY)"}]}]}]}
Traceback (most recent call last):
File "email_server.py", line 204, in <module>
start(dummy_mode = True)
File "email_server.py", line 139, in start
server.add_insecure_port('[::]:'+port)
File "/.local/lib/python3.8/site-packages/grpc/_server.py", line 961, in add_insecure_port
return _common.validate_port_binding_result(
File "/.local/lib/python3.8/site-packages/grpc/_common.py", line 166, in validate_port_binding_result
raise RuntimeError(_ERROR_MESSAGE_PORT_BINDING_FAILED % address)
RuntimeError: Failed to bind to address [::]:5055; set GRPC_VERBOSITY=debug environment variable to see detailed error message.
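The `getsockopt(SO_REUSEPORT)` errors in the log suggest gRPC is probing a socket option Nanos does not implement. One thing that may be worth trying (untested on Nanos, and it does not address the `TCP_NODELAY` errors) is disabling gRPC's use of `SO_REUSEPORT` via server options. A sketch, adapted from what `email_server.py` presumably does; the thread count and port are assumptions:

```python
# Sketch: build the gRPC server with SO_REUSEPORT disabled, since the
# Nanos log shows getsockopt(SO_REUSEPORT) is unimplemented there.
from concurrent import futures

import grpc


def make_server(port: int = 5055) -> grpc.Server:
    server = grpc.server(
        futures.ThreadPoolExecutor(max_workers=10),
        # Channel arg that tells gRPC not to set/probe SO_REUSEPORT.
        options=[("grpc.so_reuseport", 0)],
    )
    server.add_insecure_port(f"[::]:{port}")
    return server
```

The `grpc.so_reuseport` channel argument is part of the public gRPC Python options; whether it is sufficient to get past the bind failure on Nanos would need to be verified.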
Hi,
here is the context:
ops instance create -i service.authswitch.img -t onprem -p 6379
booting service.authswitch.img ...
ops instance list -t onprem -z onprem
+-------+----------------------------------------------------------+---------+--------------------------------+-------------+------+
| PID | NAME | STATUS | CREATED | PRIVATE IPS | PORT |
+-------+----------------------------------------------------------+---------+--------------------------------+-------------+------+
| 58178 | /Users/dmitrymedvedev/.ops/images/service.authswitch.img | Running | 2020-05-14 00:18:34.723467212 | 127.0.0.1 | 8080 |
| | | | +0200 CEST | | |
+-------+----------------------------------------------------------+---------+--------------------------------+-------------+------+
As you can see, the PORT reported is 8080, whereas the expected port was 6379.
NB: the service.authswitch.img image contains NodeJS code that should connect to a Redis database residing on the host.
What am I doing wrong in opening 6379? Do I have to open this port at all for the unikernel to be able to connect to an external Redis database?
While testing Consul, I was attempting to use Consul both as standalone instances and as instances in an ASG. I tried with static and DHCP IPs.
Consul has a feature that allows instances to auto-register with a specific "tag". This does not work on NanoVMs, because they are unable to make HTTPS calls.
When Consul boots, it is also incapable of checking for updates, which is also an HTTP call to the Consul servers.
This has been tested with NanoVMs and an Amazon Linux 2 image, both with the same Security Groups and InstanceProfile. NanoVMs do not work.
This has only been tested with Go. I used the same code Consul uses (the standard AWS SDK), with no results.
package main
import (
"fmt"
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/ec2"
)
func main() {
fmt.Println("vim-go")
region := "eu-west-1"
svc := ec2.New(session.New(), &aws.Config{
Region: &region,
})
for {
resp, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{})
if err != nil {
fmt.Printf("discover-aws: DescribeInstances failed: %s", err)
} else {
fmt.Println(resp)
}
time.Sleep(10 * time.Second)
}
}
Error logs
ops instance logs awssdktest-1642503817 -t aws -z eu-west-1
en1: assigned 192.168.1.38
vim-go
en1: assigned FE80::437:D3FF:FE2C:B4F1
discover-aws: DescribeInstances failed: RequestError: send request failed
caused by: Post "https://ec2.eu-west-1.amazonaws.com/": dial tcp 54.239.39.230:443: connect: invalid argument
discover-aws: DescribeInstances failed: RequestError: send request failed
caused by: Post "https://ec2.eu-west-1.amazonaws.com/": dial tcp 54.239.35.17:443: connect: invalid argument
ops instance also does not allow referencing instance profiles, so currently AWS credentials have to be assigned in the config.json.
{
"CloudConfig": {
"Zone": "eu-west-1",
"Platform": "aws",
"BucketName":"yourbucket",
"VPC": "vpc-00000000000000",
"SecurityGroup": "sg-00000000000"
},
"NameServer": "192.168.0.2",
"Env": {
"AWS_ACCESS_KEY": "************************",
"AWS_SECRET_ACCESS_KEY": "***************************************************"
}
}
Locally, the images do work. In AWS they do not.
Hi,
I would like to host my static blog using the nginx package in NanoVMs, but I can't figure out how.
BR
We want to run an instance of NanoVM in production (on our private cloud). Is there any documentation that discusses weak points and metrics to monitor?
Is there any telemetry that comes out of the box with NanoVM? Thanks!
cc: @eyberg
Something to do with the symlink, which I haven't looked into further yet, but essentially you want something like
"Args": ["node", "/node_modules/next/dist/bin/next", "start"]
vs
"Args": ["node", "/node_modules/.next//bin/next", "start"]
(at least for creating a pkg - for just a pkg load w/node it seems to not care ...)
I have this working elsewhere - need to update the Ruby pkg && provide an example.
I tried to build the gpu_nvidia klib driver according to the tutorials on these two pages, but an error occurred.
https://github.com/nanovms/ops-examples/tree/master/python/python3.8/03-tensorflow-gpu
https://nanovms.com/dev/tutorials/gpu-accelerated-computing-nanos-unikernels
I cloned the latest Nanos repository and the Nanos NVIDIA GPU klib repository, and then tried to run the build command. The complete error message is as follows:
root@server-xy0kgq9t:~/nanos/gpu-nvidia# make NANOS_DIR=/root/nanos/nanos
make -C src/nvidia
make[1]: Entering directory '/root/nanos/gpu-nvidia/src/nvidia'
make[1]: Nothing to be done for 'default'.
make[1]: Leaving directory '/root/nanos/gpu-nvidia/src/nvidia'
make -C kernel-open _out/Nanos_x86_64/gpu_nvidia
make[1]: Entering directory '/root/nanos/gpu-nvidia/kernel-open'
CC _out/Nanos_x86_64/nvidia/nv.o
In file included from /root/nanos/nanos/src/unix/unix_internal.h:3,
from common/inc/nv-nanos.h:6,
from nvidia/nv.c:30:
/root/nanos/nanos/src/kernel/kernel.h:5:10: fatal error: debug.h: No such file or directory
5 | #include <debug.h>
| ^~~~~~~~~
compilation terminated.
make[1]: *** [Makefile:34: _out/Nanos_x86_64/nvidia/nv.o] Error 1
make[1]: Leaving directory '/root/nanos/gpu-nvidia/kernel-open'
make: *** [Makefile:45: gpu_nvidia] Error 2
Please tell me how to solve this problem. Thank you very much.
Both ops run && AOT work.
Go is fast 'enough' to not need an nginx proxy - it'd be cool to showcase a Go Let's Encrypt example.
Add examples showing how to execute and run an R script.
There is some desire to be able to bind sidecar applications to your code, such as Kuma.io or Kong API Gateway (declarative, with plugins filtering requests), or even OPA agents running locally alongside the binary.
Is there an example of combining 2 binaries into a single unikernel?
eyberg@s1:~/myApp/bin/Release/netcoreapp2.2/linux-x64$ cat config.json
{
"Files": ["/lib/x86_64-linux-gnu/librt.so.1", "/usr/lib/x86_64-linux-gnu/libicuuc.so.52.1", "/usr/lib/x86_64-linux-gnu/libicui18n.so.52.1", "/usr/lib/x86_64-linux-gnu/libicudata.so.52"],
"MapDirs": {"publish/*": "/" },
"Dirs": ["publish"],
"Env": {
"COMPlus_EnableDiagnostics": "0"
}
}
eyberg@s1:~/myApp/bin/Release/netcoreapp2.2/linux-x64$ ops run -c config.json publish/myApp
booting /home/eyberg/.ops/images/myApp.img ...
assigned: 10.0.2.15
Hello World!
exit status 1
I am struggling at the moment with running an app (being developed under macOS) in a unikernel. The app has redis-fast-driver as a dependency.
I see errors regarding incorrect ELF headers.
A short example of the whole workflow of creating a bootable unikernel would be handy.
I'm trying to get CPU information using Python's cpuinfo package; below are my code and configuration file.
test.py
from cpuinfo import get_cpu_info_json
print(get_cpu_info_json())
config.json
{
"MapDirs": {
"./.venv/*": "/.local",
"./usr/lib64/*": "/usr/lib/x86_64-linux-gnu",
"./lib/*": "/lib/x86_64-linux-gnu",
"./lib64/*": "/lib64"
},
"BaseVolumeSz": "2g",
"RunConfig": {
"CPUs": 6,
"Memory": "2G",
"Accel": true
},
"Args": [
"test.py"
],
"Files": [
"test.py"
]
}
When running the image containing the above code, an error occurred. The specific error content is as follows:
running local instance
booting /root/.ops/images/python3 ...
en1: assigned 10.0.2.15
Traceback (most recent call last):
File "test.py", line 3, in <module>
print(get_cpu_info_json())
File "/.local/lib/python3.8/site-packages/cpuinfo/cpuinfo.py", line 2741, in get_cpu_info_json
p1 = Popen(command, stdout=PIPE, stderr=PIPE, stdin=PIPE)
File "/usr/local/lib/python3.8/subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/local/lib/python3.8/subprocess.py", line 1637, in _execute_child
self.pid = _posixsubprocess.fork_exec(
OSError: [Errno 38] Function not implemented
en1: assigned FE80::88B3:7BFF:FE85:1B28
exit status 3
According to the error message, the problem seems to be with the subprocess module. I know the Nanos unikernel is a single-process operating system, so I guess this is why the error occurs.
If I really need to use Python's cpuinfo package, is there any way to solve this problem?
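If the full cpuinfo output isn't strictly required, one workaround is to collect the basics with the standard library alone, which avoids the `fork_exec` path that fails with `ENOSYS` on Nanos. A minimal sketch (not a drop-in replacement for py-cpuinfo, which reports far more fields):

```python
# Sketch: basic CPU information without spawning subprocesses, since
# Nanos is single-process and fork/exec raises ENOSYS (errno 38).
import json
import os
import platform


def get_basic_cpu_info() -> dict:
    return {
        "arch": platform.machine(),       # e.g. "x86_64"
        "processor": platform.processor() # may be empty on some platforms
        ,
        "count": os.cpu_count(),          # matches RunConfig "CPUs"
    }


print(json.dumps(get_basic_cpu_info()))
```

On Linux guests, reading `/proc/cpuinfo` directly (if Nanos exposes it) would be another subprocess-free option.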
Could probably just use one of the various wasm runtime pkgs, as it produces wasm as output.
When I try to adapt HelloWorld.java like this:
import java.net.InetAddress;
public class HelloWorld {
public static void main(String[] args) throws Exception {
System.out.println("Hello World!");
System.out.println(InetAddress.getByName("api.ipify.org"));
}
}
I get
Exception in thread "main" java.net.UnknownHostException: api.ipify.org: System error
at java.base/java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:924)
When I run the class inside Docker, I have no problem. Native Method suggests the Java runtime is calling a kernel/glibc method to resolve the host.
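Since the failure is in hostname resolution, it may be worth checking whether the guest has a usable resolver configured. The ops config supports a `NameServer` key (as used in the AWS config earlier in this thread); a sketch pointing the guest at an explicit public resolver (8.8.8.8 is an arbitrary choice here):

```json
{
  "NameServer": "8.8.8.8"
}
```

If resolution works with an explicit nameserver, the original problem is likely the default DNS configuration inside the image rather than the Java runtime itself.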