OpenGL + Jupyter Lab

Discussion in 'Visualization' started by MJAS, Feb 1, 2019.

  1. I work over ssh tunnels from a Windows client to a Linux NVIDIA Titan GPU server, through a VPN. I'm using MuJoCo 1.5 in OpenAI Gym on the server, running my Python code in Jupyter Notebooks. Ideally I would like to get the MuJoCo OpenGL graphics working across ssh -X. I tried with MobaXTerm on my Windows box and can display normal X11 apps, but when I call env.render() I get the error:

    Choosing the latest nvidia driver: /usr/lib/nvidia-415, among ['/usr/lib/nvidia-352', '/usr/lib/nvidia-375', '/usr/lib/nvidia-415']
    Choosing the latest nvidia driver: /usr/lib/nvidia-415, among ['/usr/lib/nvidia-352', '/usr/lib/nvidia-375', '/usr/lib/nvidia-415']
    GLFW error (code %d): %s 65544 b'X11: RandR gamma ramp support seems broken'
    GLFW error (code %d): %s 65542 b'EGL: Failed to get EGL display: Success'
    GLFW error (code %d): %s 65544 b'X11: RandR monitor support seems broken'
    Creating window glfw

    and on the command line I see:
    ERROR: OpenGL version 1.5 or higher required

    I have no idea which of these errors is the one I actually need to resolve. Should I upgrade an OpenGL library on the Linux server, or a graphics driver on my Windows client?

    If I can't get this to work, I am very happy to work without OpenGL: render images to RGB arrays, then use matplotlib to display them. But again I'm stuck when I look at the documentation. Modifying the makefile to use -lmujoco150nogl in place of -lmujoco150 produces lots of "undefined reference" errors when linking several of the .cpp samples.
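    For the no-OpenGL route, here is a minimal sketch of displaying an off-screen frame with matplotlib. In old Gym the frame would come from env.render(mode='rgb_array'); since that needs a working MuJoCo install, a dummy array stands in for it below:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend: works in a notebook/server with no display
import matplotlib.pyplot as plt

def show_frame(rgb, path=None):
    """Display (or save to `path`) one H x W x 3 uint8 frame rendered off-screen."""
    fig, ax = plt.subplots()
    ax.imshow(rgb)
    ax.axis("off")
    if path:
        fig.savefig(path, bbox_inches="tight")
    return fig

# In the actual setup this would be:
#   frame = env.render(mode="rgb_array")   # Gym's off-screen render to a numpy array
# Dummy frame so the sketch runs anywhere:
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[:, :, 1] = 255  # solid green test image
fig = show_frame(frame)
```

    In a notebook, `%matplotlib inline` and a plain `plt.imshow(frame)` per step is usually enough; the Agg backend above is only needed when saving figures from a headless session.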

    Maybe a third option is the "EGL" version, but how exactly would I make my Python code see that version of MuJoCo? Are there environment settings that switch between the different implementations?

    I am not remotely interested in fast rendering - I just want to be able to see something. Could anyone suggest a way forward for me?
  2. Emo Todorov (Administrator, Staff Member)

    This sounds like a very complicated setup. Getting OpenGL to work properly would be a miracle -- and I don't know how to do it, especially since I don't use the Python wrappers (they are developed independently by OpenAI).

    You can render to bitmaps on the server; see the code sample record.cpp. Using EGL is probably your best bet. The NVIDIA drivers have some documentation on how to set it up. I have done it a few times; it is a pain but doable (and the pain is system-specific).

    There is another option though. Instead of rendering on the server, you could send MuJoCo states from the server to your desktop -- mjData.qpos, qvel, etc. Then run another simulation locally, set qpos and qvel in mjData, call mj_forward, and render on your desktop.
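    The state-shipping idea above can be sketched as follows. The helper names are hypothetical, and the mujoco-py calls are left as comments since they need a local MuJoCo install; the part that runs here is just packing qpos/qvel into bytes on the server and unpacking them on the desktop:

```python
import io
import numpy as np

def pack_state(qpos, qvel):
    """Server side (hypothetical helper): serialize mjData.qpos/qvel into bytes."""
    buf = io.BytesIO()
    np.savez(buf, qpos=qpos, qvel=qvel)
    return buf.getvalue()

def unpack_state(blob):
    """Desktop side (hypothetical helper): recover qpos and qvel from the payload."""
    with np.load(io.BytesIO(blob)) as data:
        return data["qpos"], data["qvel"]

# On the desktop, the recovered state would be applied to a local simulation
# before rendering, along the lines Emo describes (mujoco-py style):
#   sim.data.qpos[:] = qpos
#   sim.data.qvel[:] = qvel
#   sim.forward()        # mj_forward: recompute all derived quantities
#   viewer.render()      # local OpenGL window, no server-side graphics needed

# Round trip with dummy state vectors standing in for a real model's:
blob = pack_state(np.arange(7.0), np.zeros(6))
qpos, qvel = unpack_state(blob)
```

    How the bytes travel (a socket, ZeroMQ, or just files over the existing ssh tunnel) is a separate choice; the key point is that only qpos and qvel cross the network, and rendering happens entirely on the desktop.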