If I were trying to solve this problem, I would render text to a texture,
and then render a side of the cube to the screen using that texture.
I assume that the text is not changing from frame to frame, so it would make sense to render the article's text into a texture just once (or only when the text is updated or invalidated).
The interesting part of your problem is that you want lots of cubes, each potentially showing the entire text of an article at readable size. That simply doesn't scale. You have to choose a level of detail for your
text texture that is appropriate to how it will be seen on the screen. If
only one cube is close at a time, then only that cube needs the full-detail
large-sized texture. Other textures can be offloaded.
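As a minimal sketch of that idea (the distance thresholds and level names here are hypothetical, not from any particular engine), you could pick a detail level per cube from its distance to the camera, so only nearby cubes ever hold a full-detail texture:

```python
def detail_level(distance, near=5.0, far=50.0):
    """Pick a texture detail level from camera distance.

    Only cubes closer than `near` get the full-detail text texture;
    mid-range cubes get a low-detail version; everything else gets
    a shared thumbnail/placeholder.
    """
    if distance < near:
        return "high"
    if distance < far:
        return "low"
    return "thumbnail"
```

Each frame you'd run this over your cubes and load/unload textures only when a cube's level changes, not every frame.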
It also doesn't make sense because it's very wasteful of video memory to keep full-detail versions of hundreds of articles pre-rendered in textures.
If I had to guess what you're doing wrong, I would guess that you're either
re-rendering your text to the texture every frame, or that you're simply
using lots of large textures all the time.
Consider rendering a thumbnail of your text, and use this texture for all
the cubes which are not "pulled close" yet. Then use a large, high-detail
texture for the cube that is pulled close. You will have to fetch or render
this texture on-demand once that article's text is needed.
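One way to sketch that on-demand behavior (the class and the `render_fn` hook are hypothetical names, standing in for whatever does your actual text-to-texture rendering) is a small least-recently-used cache: render a texture the first time it's needed, and evict the oldest one when you exceed your budget.

```python
from collections import OrderedDict

class TextureCache:
    """On-demand texture cache with LRU eviction (illustrative sketch)."""

    def __init__(self, render_fn, capacity=2):
        self.render_fn = render_fn   # expensive text-to-texture render
        self.capacity = capacity     # how many high-detail textures to keep
        self._cache = OrderedDict()

    def get(self, article_id):
        if article_id in self._cache:
            self._cache.move_to_end(article_id)  # mark as recently used
            return self._cache[article_id]
        texture = self.render_fn(article_id)     # only render on demand
        self._cache[article_id] = texture
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)      # evict least-recently-used
        return texture
```

In a real renderer the eviction step would also free the GPU texture; the point is that the render cost is paid once per article, not once per frame.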
In an animated environment, you would probably keep two high-detail
textures. One for the article being dismissed, and one for the article being
summoned. (presumably, one is zooming towards you and one is zooming away
from you, or something like that.)
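That two-texture arrangement can be sketched like this (class and method names are made up for illustration): when a new article is summoned, the previously summoned one becomes the one being dismissed, and its high-detail texture can be freed once its zoom-out animation finishes.

```python
class DetailSlots:
    """Two high-detail texture slots: one zooming in, one zooming out."""

    def __init__(self):
        self.summoned = None   # article zooming towards the viewer
        self.dismissed = None  # article zooming away

    def summon(self, texture):
        # The old summoned texture is now being dismissed; it stays
        # resident until its zoom-out animation completes.
        self.dismissed = self.summoned
        self.summoned = texture
```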
It is much easier on video memory to use high-detail textures only
for the articles that are close to you, and low-detail (smaller)
textures for the articles that are far away (small on the screen).
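To make "appropriate to how it will be seen on the screen" concrete, a rough rule of thumb (my own sketch, not a standard API) is to size the texture from the cube's projected height in pixels, rounded up to the next power of two so you never allocate much more than what actually shows:

```python
import math

def texture_size_for(screen_height_px):
    """Round the cube's on-screen height up to the next power of two.

    A cube covering ~300 px of screen height needs at most a 512 px
    texture; anything bigger is wasted video memory.
    """
    return 1 << max(0, math.ceil(math.log2(max(1, screen_height_px))))
```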
Again, I'm assuming the text is not dynamic (not changing from frame to
frame of rendering). Trying to do this with dynamic text for hundreds of
articles doesn't really seem feasible to me. I would "fake" the far-away
versions, and use some kind of placeholder texture that looks
approximately right. For example, I can show you a picture of a newspaper
from far away, and you'll know it's a newspaper, but you can't read the text
until I bring it up close to you. From 50 ft, most, if not all, newspapers
look the same.
You have your work cut out for you in fetching/rendering the high-detail
textures on-demand without degrading the rest of your graphics pipeline.
Welcome to the world of resource management! :)