A recent trend in generative modeling is to build 3D-aware generators from collections of 2D images. To induce a 3D bias, such models typically rely on volumetric rendering, which is expensive to employ at high resolutions. In recent months, more than 10 works have addressed this scaling issue by training a separate 2D decoder to upsample a low-resolution image (or feature tensor) produced by a pure 3D generator. But this solution comes at a cost: not only does it break multi-view consistency (i.e., shape and texture change as the camera moves), but it also learns the geometry at low fidelity. In this paper, we show that it is possible to obtain a high-resolution 3D generator with SotA image quality by following a completely different route: simply training the model patch-wise.