
Fully Volumetric RGB and Normal Rendering for Gaussian Primitives

Extending the stochastic-solid attenuation model to unified volumetric integration of color, normals, depth, and opacity for 3D Gaussian Splatting.

The Problem

3D Gaussian Splatting (3DGS) renders scenes by projecting 3D Gaussians to 2D and alpha-compositing them in screen space. Zhang et al. [1] introduced a stochastic-solid model for volumetric depth rendering, but RGB and normals are still rendered with the splatting approximation, leaving the output channels mutually inconsistent.

Popping Artifacts

Depth ordering of overlapping Gaussians can flip with viewpoint changes, causing color and normal discontinuities.

Depth-Color Mismatch

When depth is volumetric but color is splatted, the implied surfaces disagree, producing blurry or misaligned normals.

Normal Estimation Bias

Splatting approximates Gaussians as planar ellipses, losing 3D volumetric extent that should influence normals.

Our Approach

We derive and implement fully volumetric rendering integrals for all output channels under the stochastic-solid transmittance model.

Rendering Pipeline

1. Ray-Gaussian Projection: the 3D Gaussian reduces to a 1D Gaussian along the ray.
2. Importance Sampling: quadrature samples concentrate near the Gaussian peaks.
3. Stochastic-Solid Transmittance: T(t) = ∏ [1 - αG(r(t))].
4. Volumetric Integration: color, normals, depth, and opacity.
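Step 1 has a closed form: for a Gaussian with mean μ and covariance Σ and a ray r(t) = o + t·d, the quadratic form (r(t) - μ)ᵀ Σ⁻¹ (r(t) - μ) is minimized at t_μ = dᵀΣ⁻¹(μ - o) / dᵀΣ⁻¹d with variance σ_t² = 1 / dᵀΣ⁻¹d. A minimal NumPy sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def project_gaussian_to_ray(o, d, mu, cov):
    """Reduce a 3D Gaussian (mean mu, covariance cov) to a 1D Gaussian along
    the ray r(t) = o + t*d (d assumed unit length). Returns (p, t_mu, sigma_t)
    such that G(r(t)) = p * exp(-(t - t_mu)**2 / (2 * sigma_t**2))."""
    A = np.linalg.inv(cov)                 # precision matrix Sigma^-1
    dAd = d @ A @ d                        # curvature of the quadratic along the ray
    t_mu = (d @ A @ (mu - o)) / dAd        # ray parameter of the Mahalanobis-closest point
    sigma_t = 1.0 / np.sqrt(dAd)           # 1D standard deviation along the ray
    delta = o + t_mu * d - mu
    p = np.exp(-0.5 * delta @ A @ delta)   # peak density at t_mu, at most 1
    return p, t_mu, sigma_t
```

For an isotropic Gaussian centered on the ray, this recovers t_μ at the center, σ_t = 1, and a peak of 1; off-axis Gaussians attenuate the peak by the perpendicular Mahalanobis distance.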

Key Equations

Stochastic-Solid Transmittance:
T(t) = ∏_i [1 - α_i · G_i(r(t))]

Volumetric Color Integral:
C = ∫ [-dT/dt] · c(t) dt

Volumetric Normal (from density gradient):
N = normalize( ∫ [-dT/dt] · n(r(t)) dt )
where n(x) = -∇ρ(x) / |∇ρ(x)|

Closed-Form 1D Ray Density:
G_i(r(t)) = p_i · exp(-(t - t_μ,i)² / (2σ_t,i²))
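These integrals can be sketched numerically. One interpretive choice, which is our assumption and not the paper's implementation: we evaluate each factor of the transmittance product at the largest density encountered so far along the ray, so that T is monotone non-increasing and the total per-Gaussian opacity reduces to α_i·p_i, consistent with splatting's per-Gaussian alpha. All names below are ours.

```python
import numpy as np

def transmittance(t, gaussians):
    """Stochastic-solid transmittance T(t) = prod_i [1 - alpha_i * G_i(r(t))].
    Assumed reading: each factor uses the running max of G_i along the ray,
    which makes T monotone; total opacity becomes 1 - prod_i (1 - alpha_i*p_i)."""
    T = np.ones_like(t)
    for alpha, p, t_mu, sigma_t in gaussians:
        G = p * np.exp(-(t - t_mu) ** 2 / (2.0 * sigma_t ** 2))
        G_max = np.maximum.accumulate(G)   # largest density encountered up to t
        T *= 1.0 - alpha * G_max
    return T

def render_ray(gaussians, colors, t_near, t_far, n_samples=64):
    """Quadrature of C = int [-dT/dt] c(t) dt, plus depth and opacity.
    gaussians: list of (alpha, p, t_mu, sigma_t); colors: (N, 3) array."""
    edges = np.linspace(t_near, t_far, n_samples + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    T = transmittance(edges, gaussians)
    w = T[:-1] - T[1:]                     # discrete [-dT], sums to total opacity
    # per-sample color: density-weighted blend of the overlapping Gaussians
    dens = np.stack([a * p * np.exp(-(mids - tm) ** 2 / (2.0 * s ** 2))
                     for a, p, tm, s in gaussians])              # (N, K)
    c_t = (dens[:, :, None] * colors[:, None, :]).sum(0) \
          / np.maximum(dens.sum(0), 1e-12)[:, None]              # (K, 3)
    C = (w[:, None] * c_t).sum(0)          # volumetric color
    opacity = w.sum()                      # equals T(t_near) - T(t_far)
    depth = (w * mids).sum() / max(opacity, 1e-12)
    return C, depth, opacity
```

Under this reading, the weights concentrate on the rising front of each Gaussian, so the expected depth sits slightly in front of the density peak.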

Experimental Results

Volumetric vs. Splatting Divergence

The two methods produce substantially different outputs. The Dense scene with 25 Gaussians shows the largest divergence.

Scene         Gaussians   RGB RMSE   Normal MAE (deg)   Depth RMSE   Opacity RMSE
Simple        5           0.293      60.5               0.832        0.421
Moderate      12          0.330      62.4               0.730        0.551
Dense         25          0.486      65.6               0.806        0.762
Anisotropic   10          0.393      57.4               0.746        0.643
Deep Overlap  8           0.330      ---                ---          0.514

Quadrature Convergence

Volumetric rendering converges rapidly with increasing quadrature samples. Even 8 samples dramatically outperform splatting.

Samples     RGB RMSE (vs ref)   Normal MAE (deg)   Time (s)
8           1.38e-4             11.74              5.20
16          6.99e-5             5.16               5.50
32          4.17e-5             1.94               5.15
48          2.47e-5             1.42               5.58
64          1.18e-5             0.74               6.50
96          5.95e-6             0.34               6.39
128         2.80e-6             0.22               5.87
Splatting   0.329               64.0               ---
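The same qualitative convergence can be reproduced on the closed-form 1D ray density alone, a deliberately reduced setting (not the full renderer; the parameter values are illustrative):

```python
import numpy as np

def midpoint_integral(n, p=1.0, t_mu=1.0, sigma=0.05, t0=0.0, t1=2.0):
    """Midpoint quadrature of the closed-form 1D ray density
    p * exp(-(t - t_mu)^2 / (2 sigma^2)) over [t0, t1]."""
    edges = np.linspace(t0, t1, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    f = p * np.exp(-(mids - t_mu) ** 2 / (2.0 * sigma ** 2))
    return f.sum() * (t1 - t0) / n

ref = midpoint_integral(4096)   # dense reference, close to sigma * sqrt(2*pi)
errors = {n: abs(midpoint_integral(n) - ref) for n in (8, 16, 32, 64, 128)}
```

Because the integrand is smooth, the midpoint-rule error on a Gaussian falls off super-polynomially once the sample spacing drops below the Gaussian's width, which mirrors the rapid error decay in the table above.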

Render Time Comparison

Volumetric rendering is competitive with splatting, and actually faster for dense scenes.

Scene         Volumetric (s)   Splatting (s)   Ratio
Simple        4.6              3.8             1.23x
Moderate      7.9              8.4             0.94x
Dense         10.3             14.2            0.72x
Anisotropic   8.9              10.3            0.87x
Deep Overlap  7.6              7.1             1.07x

Key Findings

RGB RMSE up to 0.486

Volumetric and splatting renderings diverge substantially in dense, highly overlapping configurations.

Normal error exceeds 65 degrees

Splatting produces fundamentally different normal fields by evaluating normals at projected centers only.

Sub-degree accuracy at 64 samples

Quadrature converges rapidly: 64 samples achieve 0.74-degree normal accuracy vs. splatting's 64-degree error.

Fully differentiable

Gradients flow through quadrature, transmittance, and density-gradient normals for end-to-end optimization.
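Differentiability can be checked on a reduced stand-in (one Gaussian, the per-sample transmittance product, and opacity only; this is our toy example, not the paper's pipeline): the analytic gradient through the product should match finite differences.

```python
import numpy as np

def opacity_and_grad(alpha, p=1.0, t_mu=1.0, sigma=0.1, n=64, t0=0.0, t1=2.0):
    """Total opacity O = 1 - prod_k (1 - alpha * G(t_k)) for one Gaussian,
    sampled at n quadrature points, with its analytic gradient dO/dalpha.
    A reduced illustration that gradients flow through the transmittance product."""
    t = np.linspace(t0, t1, n)
    G = p * np.exp(-(t - t_mu) ** 2 / (2.0 * sigma ** 2))
    T_final = np.prod(1.0 - alpha * G)
    dO = T_final * np.sum(G / (1.0 - alpha * G))  # d/dalpha of 1 - prod(1 - alpha*G)
    return 1.0 - T_final, dO
```

In practice the full renderer would backpropagate the same product rule through every Gaussian, sample, and channel; a central finite difference on alpha agrees with the analytic gradient to high precision.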

References

[1] Zhang et al. "Geometry-Grounded Gaussian Splatting." arXiv:2601.17835, Jan 2026.

[2] Kerbl et al. "3D Gaussian Splatting for Real-Time Radiance Field Rendering." ACM TOG, 2023.

[3] Mildenhall et al. "NeRF: Representing Scenes as Neural Radiance Fields." ECCV 2020.

[4] Huang et al. "2D Gaussian Splatting for Geometrically Accurate Radiance Fields." ACM TOG, 2024.

[5] Max. "Optical Models for Direct Volume Rendering." IEEE TVCG, 1995.