I would like to ask you guys something: we are running test renders of videos in the Ogg Theora format and in the AVI format (using the DivX codec). The source data is a set of BMP pictures converted into an uncompressed AVI video file, so the source is 100% quality with no compression applied.
The problem is that even when we set a relatively high quality for the Ogg Theora video, we do not get the same quality as with the AVI (DivX). Even when the Ogg Theora file is twice the size (10 MB) of the AVI file (5 MB), the quality is still worse (you do not have to be a video expert to see the difference): the AVI (DivX) is sharper, while the Theora video is blurred.
When the file sizes of the Theora (5 MB) and AVI (5 MB) versions were similar, the Theora was a lot worse than the AVI. So we doubled the Theora bitrate (giving a 10 MB file in the end), but it is still worse than the old AVI, which is half the size of the Theora video.
I would like to ask whether we are doing something wrong, or whether it is simply impossible to get better quality out of Ogg Theora at a similar file size. We are using a program called SUPER for the conversion to Ogg Theora.
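In case it helps to rule out SUPER's default settings, here is roughly the kind of cross-check we could try: encoding the uncompressed AVI straight to Theora with ffmpeg's libtheora encoder. This is just a sketch, the file names are placeholders, and it assumes an ffmpeg build with libtheora support:

```python
import subprocess

# Placeholder file names -- substitute the real paths.
SOURCE = "render_uncompressed.avi"   # the lossless AVI built from the BMP frames
OUTPUT = "render_theora.ogv"

# Quality-based Theora encode; libtheora's -q:v scale runs from 0 to 10,
# with 10 being the best quality (and the largest file).
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "libtheora",
    "-q:v", "8",          # try values around 7-10 and compare size vs. sharpness
    "-an",                # drop audio for this test (if the source has any)
    OUTPUT,
], check=True)
```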
The video is 800×600 px and 25 seconds long.
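For reference, these are the average bitrates the file sizes mentioned above work out to (assuming 1 MB = 10^6 bytes):

```python
# Average bitrates implied by the file sizes for a 25-second clip.
duration_s = 25
for label, size_mb in [("AVI (DivX), 5 MB", 5), ("Theora, 5 MB", 5), ("Theora, 10 MB", 10)]:
    print(f"{label}: ~{size_mb * 8 / duration_s:.1f} Mbit/s")
# Roughly 1.6 Mbit/s for the 5 MB files and 3.2 Mbit/s for the 10 MB one.
```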
(Sorry for my English, I am writing this in a hurry.)