Rendering scenes observed in a monocular video from novel viewpoints is a challenging problem. For static scenes, the community has studied both scene-specific optimization techniques, which optimize on every test scene, and generalized techniques, which only run a deep net forward pass on a test scene. In contrast, for dynamic scenes, scene-specific optimization techniques exist, but, to the best of our knowledge, there is currently no generalized method for dynamic novel view synthesis from a given monocular video. To explore whether generalized dynamic novel view synthesis from monocular videos is possible today, we establish an analysis framework based on existing techniques and work toward the generalized approach. We find that a pseudo-generalized process without scene-specific appearance optimization is possible, but geometrically and temporally consistent depth estimates are needed. Despite involving no scene-specific appearance optimization, the pseudo-generalized approach improves upon some scene-specific methods. For more information, see the project page at https://xiaoming-zhao.github.io/projects/pgdvs.