Predicting Light Angles with Synthetic Data: A Technical Case Study

This project explores how a Convolutional Neural Network (CNN) trained on synthetic data from Unity can predict directional light angles. By streamlining lighting analysis, this approach gives technical artists tools to enhance visual quality, reduce iteration time, and build immersive AR/VR environments. The case study bridges traditional artistry and emerging machine learning techniques.

Key Technical Innovations

  • Refined Synthetic Data Collection: The dataset was generated in Unity by rotating a directional light source through a -150° to 150° range along the Y-axis. This range was chosen so that every captured angle produced visible reflections and shading effects from the camera’s perspective. Sampling at a finer 2.5° step size yielded a more comprehensive dataset for training the CNN.
    for (float angleY = -150f; angleY <= 150f; angleY += 2.5f) {
        directionalLight.transform.rotation = Quaternion.Euler(0f, angleY, 0f);
        CaptureAndSaveImage(angleY); // renders the scene and saves a labeled frame
    }
  • Optimized Neural Network Architecture: A CNN was designed with three convolutional layers, max-pooling, and dense layers to process 128x128 pixel images. Dropout layers and learning rate scheduling were incorporated to prevent overfitting and stabilize training.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    model = Sequential([
        Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(128, 128, 3)),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(64, kernel_size=(3, 3), activation='relu'),
        MaxPooling2D(pool_size=(2, 2)),
        Conv2D(128, kernel_size=(3, 3), activation='relu'),  # third convolutional block
        MaxPooling2D(pool_size=(2, 2)),
        Flatten(),
        Dense(128, activation='relu'),
        Dropout(0.5),  # regularization against overfitting
        Dense(1)       # single regression output: the predicted Y-axis angle
    ])
  • Improved Data Preprocessing: Images were normalized to the [0, 1] range and split into training, validation, and testing sets. Only visually significant light angles (-150° to 150°) were included, ensuring the model was trained on meaningful data; a loading-and-splitting sketch appears after this list.
    # Scale pixel values from [0, 255] down to [0, 1].
    img_array = tf.keras.preprocessing.image.img_to_array(img) / 255.0
  • Evaluation and Visualization: The trained model achieved a validation Mean Absolute Error (MAE) of roughly 8.7 degrees when predicting light angles for unseen test data. Plotting predicted against true angles showed how closely the model’s outputs tracked the ground-truth values (see the plotting sketch after this list).
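
For concreteness, the sketch below shows one way the loading, filtering, and splitting steps could fit together. It is a minimal sketch rather than the project’s exact pipeline: the captures folder, the angle_<value>.png filename convention, and the 70/15/15 split ratios are illustrative assumptions.

    import os
    import numpy as np
    import tensorflow as tf

    DATA_DIR = "captures"  # assumed output folder for the Unity screenshots

    images, angles = [], []
    for fname in sorted(os.listdir(DATA_DIR)):
        # Assumed naming convention: "angle_-147.5.png" encodes the label.
        angle = float(fname.replace("angle_", "").replace(".png", ""))
        if not -150.0 <= angle <= 150.0:
            continue  # keep only the visually significant range
        img = tf.keras.preprocessing.image.load_img(
            os.path.join(DATA_DIR, fname), target_size=(128, 128))
        images.append(tf.keras.preprocessing.image.img_to_array(img) / 255.0)
        angles.append(angle)

    X = np.stack(images)
    y = np.array(angles, dtype=np.float32)

    # Shuffle, then split 70/15/15 into train/validation/test (illustrative ratios).
    idx = np.random.permutation(len(X))
    train, val, test = np.split(idx, [int(0.7 * len(X)), int(0.85 * len(X))])
    X_train, y_train = X[train], y[train]
    X_val, y_val = X[val], y[val]
    X_test, y_test = X[test], y[test]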
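
The predicted-versus-true comparison can be reproduced with a short plotting script. The sketch below assumes the trained model and the X_test/y_test arrays from the loading sketch above.

    import matplotlib.pyplot as plt

    # Predict angles for the held-out test images.
    preds = model.predict(X_test).flatten()

    plt.figure(figsize=(6, 6))
    plt.scatter(y_test, preds, alpha=0.6, label="predictions")
    plt.plot([-150, 150], [-150, 150], "r--", label="perfect prediction")
    plt.xlabel("True light angle (degrees)")
    plt.ylabel("Predicted light angle (degrees)")
    plt.legend()
    plt.show()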

Challenges and Solutions

Challenge: Early iterations included irrelevant data, such as light angles that produced no visible reflections due to the camera’s position. These extraneous samples hindered the model’s learning ability.
Solution: By limiting the light rotation to a practical range (-150° to 150°) and refining the granularity, the dataset focused on relevant variations in the sphere’s appearance.

Challenge: Early in training, predictions collapsed to a nearly constant value, a symptom of limited variation in the data and an untuned learning rate.
Solution: Exponential learning rate decay, finer data granularity, and early stopping improved the model’s performance; a training-configuration sketch follows below.
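
As one possible wiring, exponential learning rate decay and early stopping can be added to Keras training as shown below. The hyperparameters (initial rate, decay schedule, patience, epochs, batch size) are illustrative assumptions rather than the project’s exact values; model is the network defined earlier.

    import tensorflow as tf

    # Decay the learning rate exponentially as training progresses.
    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
                  loss="mse", metrics=["mae"])

    # Stop once validation MAE plateaus and keep the best weights seen.
    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_mae", patience=10, restore_best_weights=True)

    history = model.fit(X_train, y_train,
                        validation_data=(X_val, y_val),
                        epochs=100, batch_size=32,
                        callbacks=[early_stop])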

Challenge: Redundant samples, where light angles fell outside the camera’s visible range, added noise to the training process.
Solution: Detailed data analysis led to restricting the dataset to the practical range (-150° to 150°), enhancing both training speed and model accuracy.

Practical Applications

  • Lighting Estimation for AR/VR: Predicting light angles enables realistic lighting adjustments in augmented or virtual environments, making virtual objects blend seamlessly with real-world settings.
  • Automated Lighting Analysis: Technical artists can use similar CNNs to analyze in-game lighting setups, suggesting adjustments for optimal visual quality.
  • Photorealistic Rendering: Predictive models can inform real-time ray tracing or baked lighting systems in game engines, reducing manual trial and error.
  • Procedural Lighting Systems: Predictive models can enhance procedural world generation by automatically adapting lighting conditions based on geometry and player movement.
  • Enhanced Artist-Programmer Collaboration: Automating optimal lighting angle detection allows technical artists to focus on creative tasks while minimizing iterative adjustments with developers.

Validation Results and Next Steps

The model achieved a validation Mean Absolute Error (MAE) of roughly 8.7 degrees, a promising step toward accurate light angle prediction. Future steps include integrating more complex scene geometries, testing a variety of material types, and exploring domain adaptation techniques so the trained model can be applied to real-world photography. These expansions could further extend its practical utility in gaming, AR/VR, and procedural content generation.

Conclusion

This project highlights how synthetic data from game engines like Unity can bridge the gap between traditional workflows and machine learning. By focusing on a targeted problem—predicting light angles—this work showcases the potential for CNNs to enhance workflows in technical artistry and beyond. The methodology provides a foundation for exploring broader applications in gaming, AR/VR, and procedural content generation.
