I have two IP cameras installed and am recording with external audio from a mixer board. I can’t get preview audio (while recording) to work right: with preview on, I get a double-audio echo. It must be a setup issue?
If you want to listen back to the audio from the mixer without echo, there are two ways you could do this.
1. Don’t use XSplit Broadcaster’s microphone input for your mixer. Instead, add it as an audio device source to the scenes you want the audio on. Right-click the source and make sure “Audio Output” is set to System Sound. (IMPORTANT: make sure your PC’s audio is not being sent through the mixer too, as this would cause a feedback loop.)
2. Don’t add audio devices to your scenes. Instead, go into XSplit Broadcaster’s audio settings and set the microphone device to be your audio mixer. Then set the audio preview device to be something other than the “System Sound” device.
If we could get a better understanding of your setup, though, it might help paint a clearer picture.
I’m using XSplit Broadcaster here as a switcher that serves as the video source for video-conferencing programs like Skype and Zoom. These programs can easily find the video coming from XSplit.
How do I do the same thing with audio? Is there a way to configure XSplit so that its output can be set as the default audio “recording” device in Windows and get picked up as the mic or line input, bringing the XSplit final mix into Skype (or another video-conferencing program)? Right now, these programs only see whatever is set as the default mic in Windows.
That would include Chrome’s settings for mic and camera: these programs see the XSplit video output as a camera because it is listed as one of the possible sources. With audio, however, they use whichever source is designated the default under Windows Sound.
I would like to be able to play a video clip in XSplit and have the audio for that clip heard by my audience on a video conference call when I am giving a presentation. I’d also like to set up an XSplit scene that would allow me to play a sound effect, if possible.
Could you provide some help on this please?
Right now, there’s no virtual microphone device for XSplit. You may be able to achieve something like this using virtual audio cables or a program called Voicemeeter. You would set XSplit’s speaker output to the virtual device, and then in the Voicemeeter software you would route whatever audio is playing to that device to a virtual microphone device as well.
Take a look at VoiceMeeter Potato for total control of the audio mix. I found it the other day and it looks like it will help solve many issues.
I am setting up a location “broadcast” system with two cameras and external audio via a USB mixer with pro mics and stereo sound. This system will be used for live streaming of lectures and performances, not gaming.
I’m familiar with the standard broadcast studio workflow, and also computer video/audio setup and issues.
My project will stream at 1280x720, 30 FPS (which means approximately 33 ms/frame; this comes in useful later). I’m running SparkoCam to convert my Canon DSLR into a “webcam”, alongside a Razer USB camera, as my two live inputs.
My problem is getting A/V sync (i.e. lip sync) worked out between these two different live video sources.
To investigate this, I recorded my clapperboard 10 times via local recording for each of the video inputs. The mic and the camera lens are approximately 1/3 meter from the clapper; remember, this is the mic going through my external audio mixer path. No dropped frames were reported.
When I load the locally recorded file into my video editor (Vegas), I can clearly see the offset in frames between the clapper image and the sound. The webcam video lags about 4 frames / 122 ms; the SparkoCam video lags about 8 frames / 285 ms. If I punch these values into the global audio panel, I can indeed get good sync… one camera at a time.
So here’s the issue: I don’t see a way to attach the two different audio delays to the corresponding live video sources. I tried to use the individual devices panel, but it didn’t have the effect I expected; in fact, it wasn’t clear to me what was an input, a send, or an output.
(The A/V routing used in XSplit seems opaque; an overall signal map would be very useful.)
So, what next?
I suppose the issue right now is that all camera devices lag behind the microphone audio. The delay option on each of the camera sources will actually delay the video feed itself by x ms.
If the delays are static, my suggestion would normally be to delay the mixer audio, from XSplit Broadcaster’s audio settings, by the maximum delay you have, which in this case is the SparkoCam (285 ms). This only makes sense if you’ve set the audio mixer as the mic device rather than adding it as a source.
At this point, the Canon DSLR is now in sync with the audio. The final issue would be the webcam video; for that device, I would go into the properties of the source in XSplit and set a delay of 163 ms (285 - 122).
This solution only really works if the audio into the mixer doesn’t include source audio, such as video files being played in XSplit. By this I mean: audio output of PC -> mixer -> audio input in XSplit. Obviously, at that point, all media sources would also be delayed by 285 ms.
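To make the arithmetic of this scheme concrete, here’s a small Python sketch (names are just for illustration): delay the global mixer audio by the worst-case video lag, then pad each faster video source so every camera ends up with the same total delay. The lag figures are the ones measured in the clapperboard test above.

```python
# Measured video lag behind the mixer audio, in ms (from the clapper test).
measured_lag_ms = {
    "sparkocam": 285,  # Canon DSLR via SparkoCam
    "webcam": 122,     # Razer USB camera
}

# 1. Delay the global mixer audio by the slowest source's lag.
audio_delay_ms = max(measured_lag_ms.values())

# 2. Pad every faster video source up to the same total lag.
video_delay_ms = {
    name: audio_delay_ms - lag for name, lag in measured_lag_ms.items()
}

print(audio_delay_ms)   # 285 -> global audio delay
print(video_delay_ms)   # {'sparkocam': 0, 'webcam': 163}
```

The slowest camera needs no video delay of its own; only the audio and the faster sources get padded.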
Hmmm, I’ll think about and try this, thanks.
Update: For my application this looks like it will work.
1. Set the “microphone” (i.e. global audio source) to my USB audio source. If I want a local computer-audio source, set it here too.
2. Set the audio input to None in each camera source (which does not appear to be part of the presentation save/restore, btw), or maybe just mute via per-scene audio. Muting the audio sources here prevents the weird feed-forward loops I was having.
3. Point all the cameras at my clapperboard.
4. Record a series of clapper strikes locally, view the file in an editor, and calculate the average delay needed for each video source.
5. Set the longest needed audio delay (for the SparkoCam in this case) in the global audio settings.
6. Delay the video from the other webcams to match.
7. Recheck, making sure the sound never precedes the video.
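The measurement step above (several clapper strikes, averaged per source) can be sketched like this. The per-strike frame counts here are hypothetical readings off the editor timeline, not the poster’s actual data, at the stated 30 FPS:

```python
# Hypothetical per-strike lag readings (in frames) from the editor
# timeline, one list per video source.
FPS = 30
frame_ms = 1000 / FPS  # ~33.3 ms per frame at 30 FPS

clapper_offsets_frames = {
    "sparkocam": [8, 9, 8, 8, 9],
    "webcam": [4, 4, 3, 4, 4],
}

def average_delay_ms(offsets_frames):
    """Average the per-strike lags and convert frames to milliseconds."""
    return sum(offsets_frames) / len(offsets_frames) * frame_ms

delays = {name: average_delay_ms(v) for name, v in clapper_offsets_frames.items()}
for name, ms in delays.items():
    print(f"{name}: {ms:.0f} ms")
```

Averaging across strikes smooths out the roughly one-frame jitter you’d expect from reading offsets off a 30 FPS timeline.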
Thanks for your help and the pointers.
It seems like some of the video and audio source parameters aren’t sticky, but it’s workable.
I found that the nomenclature of mics and speakers doesn’t map very well to these more complex scenarios. For example, the “System Sound” device is called “Line out (Surface Sound)” on my computer, and my “mic” is a stereo mixer on a USB device.
I suggest source/send or input/output might be clearer labels for signal-flow direction.