
Visualizing the avatar

Decode the base64 3D avatar returned by the Bodygram Platform and render it in the browser with three.js.

Overview

When you run a photo scan or a stats scan through the Bodygram Platform, the response includes an avatar field containing a 3D model of the scanned body. The model is returned as a standard Wavefront .obj file, base64-encoded inside the JSON response.

Because .obj is a widely supported format, you can render the avatar with virtually any 3D engine. This guide shows the quickest path: decode the base64 string to OBJ text and load it straight into a three.js scene using the OBJLoader addon.

The .obj format only describes geometry (vertices and faces). The Bodygram Platform does not return textures or materials — you apply your own lighting and material when rendering.


Avatar response shape

The avatar lives at entry.avatar on any successful scan response:

{
  "entry": {
    "id": "scan_testIDFHsFggF",
    "status": "success",
    "avatar": {
      "data": "<BASE64_ENCODED_OBJ_FILE>",
      "format": "obj",
      "type": "highResolution"
    },
    "measurements": [ /* ... */ ]
  }
}

type Avatar = {
  data: string;            // base64-encoded Wavefront .obj file
  format: 'obj';
  type: 'highResolution';
};
Field     Description
data      The .obj file, base64-encoded. Decode it to get the raw text of a standard Wavefront OBJ file.
format    Always "obj" — Wavefront OBJ.
type      Always "highResolution" — the high-resolution mesh.

On status: "failure", avatar is null — always check status before reading it.
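Putting the guard and the decode together, a small helper sketch that returns the raw .obj text, or null for failed scans (getAvatarObjText is a hypothetical name, not part of the Bodygram API):

```javascript
// Extract and decode the avatar from a Bodygram scan response.
// Returns the raw Wavefront .obj text, or null if the scan failed.
function getAvatarObjText(response) {
  const { entry } = response;
  if (entry.status !== 'success' || !entry.avatar) {
    return null; // on status: "failure", entry.avatar is null
  }
  // atob is available in browsers and in Node.js 16+.
  return atob(entry.avatar.data);
}
```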


Render with three.js

OBJLoader is a standard three.js addon that parses OBJ text directly. Since Bodygram returns the OBJ as a base64 string, you decode it with atob and hand the string to loader.parse() — no network request, no file on disk.

The demo below loads a sample scan response, extracts entry.avatar.data, and renders it. Drag to orbit, scroll to zoom. Switch to Paste JSON to try it with a response from your own scan.

Interactive Demo

index.html
<!doctype html>
<html>
<head>
  <meta charset="utf-8" />
  <title>Bodygram Avatar Viewer</title>
  <style>
    html, body { margin: 0; height: 100%; }
    #viewer { width: 100vw; height: 100vh; background: #f5f5f5; }
  </style>

  <!-- Load three.js and its addons from a CDN via an importmap -->
  <script type="importmap">
    {
      "imports": {
        "three": "https://esm.sh/three@0.161.0",
        "three/addons/": "https://esm.sh/three@0.161.0/examples/jsm/"
      }
    }
  </script>
</head>
<body>
  <div id="viewer"></div>

  <script type="module">
    import * as THREE from 'three';
    import { OBJLoader } from 'three/addons/loaders/OBJLoader.js';
    import { OrbitControls } from 'three/addons/controls/OrbitControls.js';

    // 1. Fetch the Bodygram scan response.
    //    In production this is your own server endpoint that proxies the
    //    Bodygram Platform /scans response — never call it directly from
    //    the browser with your API key.
    const res = await fetch('/success-response/platform-success-response.json');
    const { entry } = await res.json();

    if (entry.status !== 'success' || !entry.avatar) {
      throw new Error('Scan did not return an avatar.');
    }

    // 2. Decode the base64 .obj and parse it into a three.js Object3D.
    const objText = atob(entry.avatar.data);
    const object = new OBJLoader().parse(objText);

    // 3. Apply a simple material to every mesh (the .obj has no materials).
    const material = new THREE.MeshStandardMaterial({ color: 0xcccccc });
    object.traverse((child) => {
      if (child.isMesh) child.material = material;
    });

    // 4. Set up the scene and lighting.
    //    Ambient + hemisphere give a bright base; key/fill/back lights
    //    keep every side of the mesh readable as the user orbits around it.
    const scene = new THREE.Scene();
    scene.background = new THREE.Color(0xf5f5f5);
    scene.add(object);
    scene.add(new THREE.AmbientLight(0xffffff, 1.1));
    scene.add(new THREE.HemisphereLight(0xffffff, 0xe5e5e5, 0.9));
    const keyLight = new THREE.DirectionalLight(0xffffff, 1.4);
    keyLight.position.set(1, 2, 3);
    scene.add(keyLight);
    const fillLight = new THREE.DirectionalLight(0xffffff, 0.9);
    fillLight.position.set(-2, 1, -2);
    scene.add(fillLight);
    const backLight = new THREE.DirectionalLight(0xffffff, 0.5);
    backLight.position.set(0, -1, -3);
    scene.add(backLight);

    // 5. Frame the camera around the mesh.
    const box = new THREE.Box3().setFromObject(object);
    const size = box.getSize(new THREE.Vector3()).length();
    const center = box.getCenter(new THREE.Vector3());

    const container = document.getElementById('viewer');
    const camera = new THREE.PerspectiveCamera(
      45,
      container.clientWidth / container.clientHeight,
      0.01,
      size * 10,
    );
    camera.position.copy(center).add(new THREE.Vector3(0, 0, size * 1.5));
    camera.lookAt(center);

    // 6. Render with orbit controls for an interactive viewer.
    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(container.clientWidth, container.clientHeight);
    renderer.setPixelRatio(window.devicePixelRatio);
    container.appendChild(renderer.domElement);

    const controls = new OrbitControls(camera, renderer.domElement);
    controls.target.copy(center);
    controls.enableDamping = true;

    renderer.setAnimationLoop(() => {
      controls.update();
      renderer.render(scene, camera);
    });
  </script>
</body>
</html>

How it works

The demo mirrors what the index.html snippet above does:

  1. Load three.js from a CDN via an importmap, so you can import * as THREE from 'three' in a plain <script type="module"> with no bundler.
  2. Fetch the scan response from your server (the sample uses the static JSON at /success-response/platform-success-response.json — in production this is your own endpoint that proxies the Bodygram Platform /scans call, so your API key never reaches the browser).
  3. Decode and parse — atob(entry.avatar.data) gives you the raw .obj text; new OBJLoader().parse(objText) turns it into a THREE.Object3D.
  4. Apply a material — the .obj has no materials, so every mesh is tagged with a MeshStandardMaterial.
  5. Frame the camera — Box3.setFromObject + the bounding box's diagonal gives a reliable default camera distance.
  6. Orbit controls — drop in OrbitControls and drive it from an animation loop for an interactive viewer.
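One thing the snippet omits is resize handling: the canvas keeps its initial size if the window changes. A sketch of the fix, reusing the camera, renderer, and container from the demo (the onViewerResize name is illustrative):

```javascript
// Keep the camera's aspect ratio and the canvas size in sync with the
// container element. Call this from a window 'resize' listener.
function onViewerResize(camera, renderer, container) {
  camera.aspect = container.clientWidth / container.clientHeight;
  camera.updateProjectionMatrix(); // required after changing aspect
  renderer.setSize(container.clientWidth, container.clientHeight);
}

// In the demo's module script:
// window.addEventListener('resize', () =>
//   onViewerResize(camera, renderer, container));
```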
