SteamVR (HTC Vive) Unity Plugin In-Depth Analysis (Part 15)

Published 2017-06-12

11. Textures

11.1. arrow.png

An arrow shape. It is used on the _Stats child object under the [Status] prefab, where it serves as the mouse cursor.

11.2. background.png

As the name suggests, a background image. It is also used on the _Stats child object under the [Status] prefab, as the menu background.

11.3. logo.png

This SteamVR logo image is used in many places, for example in SteamVR_CameraInspector.

The image is actually assigned in the Editor scripts, including SteamVR_Editor.cs, SteamVR_Settings.cs, and SteamVR_Update.cs. When the plugin is first imported, it checks for version upgrades and pops up a dialog carrying this logo.

On first import, a settings dialog pops up; when an upgrade is available, an upgrade dialog appears. Both dialogs use this logo.png.

11.4. workshop.png

This is the texture in the material applied to each cube face of the cube matrix; see §6 Materials.

12. v1.1.1

Version v1.1.1 of the SteamVR plugin was released on 2016-07-28. readme.txt briefly describes the changes:

- Updated the SteamVR runtime to v1467410709 and the SDK to version 1.0.2.
- Updated version notices.
- Added a SteamVR_TrackedCamera script for accessing tracked cameras' video streams and poses.
- Added a SteamVR_TestTrackedCamera scene and associated script to demonstrate how to use SteamVR_TrackedCamera.
- Reworked the SteamVR_Fade shader to accommodate changes in Unity 5.4.
- SteamVR_GameView now renders the companion window using the compositor's mirror texture (only relevant for Unity versions before 5.4).
- Renamed externalApp to internalProcess in SteamVR_LoadLevel to reflect the actual functionality.
- Fixed a SteamVR_PlayArea material-loading bug in Unity 5.4.
- Added support for taking stereo panorama screenshots.
- Removed the code that set Time.maximumDeltaTime, since it caused problems.

 

What follows is a complete analysis of all changes in v1.1.1 relative to v1.1.0.

12.1. openvr_api.cs

The changes in this script mirror changes in the OpenVR SDK (see the analysis of the corresponding openvr.h for details of the interface changes); a short sketch of reaching the new interfaces from C# follows the list:

- Added the IVRTrackedCamera interface.
- IVRApplications gained LaunchApplicationFromMimeType, SetDefaultApplicationForMimeType, GetDefaultApplicationForMimeType, GetApplicationSupportedMimeTypes, GetApplicationsThatSupportMimeType, and GetApplicationLaunchArguments.
- IVRCompositor gained GetCumulativeStats, GetMirrorTextureD3D11, GetMirrorTextureGL, ReleaseSharedGLTexture, LockGLSharedTextureForAccess, and UnlockGLSharedTextureForAccess.
- IVROverlay gained SetOverlayTexelAspect, GetOverlayTexelAspect, SetOverlaySortOrder, GetOverlaySortOrder, and GetOverlayTextureSize.
- IVRRenderModels gained GetRenderModelThumbnailURL, GetRenderModelOriginalPath, and GetRenderModelErrorNameFromEnum.
- Added the IVRScreenshots interface.
- Added the IVRResources interface.
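A minimal sketch (based on the static accessor pattern openvr_api.cs uses; error handling omitted) of touching two of the new interfaces from C#:

// Reach the new v1.0.2 interfaces through the OpenVR helper class.
var trackedCamera = OpenVR.TrackedCamera;  // new IVRTrackedCamera
var screenshots = OpenVR.Screenshots;      // new IVRScreenshots

// Same probe SteamVR_TrackedCamera performs below.
bool hasCamera = false;
if (trackedCamera != null)
	trackedCamera.HasCamera(OpenVR.k_unTrackedDeviceIndex_Hmd, ref hasCamera);
if (screenshots != null)
	Debug.Log("IVRScreenshots available, HMD camera present: " + hasCamera);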

12.2. Extras

12.2.1. SteamVR_TestTrackedCamera.cs

This is a new script demonstrating how to use SteamVR_TrackedCamera (which drives the front-facing camera on the headset); you may want to read the SteamVR_TrackedCamera.cs section below first.

 

public class SteamVR_TestTrackedCamera : MonoBehaviour
{
	// The material on the object that displays the video; the video is shown
	// through this material's texture, so the current video frame texture is
	// assigned to it each frame.
	public Material material;

	// The target object on which the video is displayed.
	public Transform target;

	// Whether to display the distorted or the undistorted (corrected) feed.
	public bool undistorted = true;

	// Whether to crop the video image: after distortion correction the image
	// still has some distortion near the edges.
	public bool cropped = true;

	void OnEnable()
	{
		// The video stream must be symmetrically acquired and released in
		// order to properly disable the stream once there are no consumers.
		//
		// This acquires the video stream; SteamVR_TrackedCamera caches the
		// source internally, so a temporary variable here is fine. Note that
		// this ends up calling the underlying IVRTrackedCamera interface, so
		// OpenVR must already be initialized or the call fails. Either attach
		// this script under an object that is guaranteed to initialize OpenVR
		// first (e.g. the CameraRig), or add a "SteamVR.instance" statement
		// before the call below to trigger OpenVR initialization.
		var source = SteamVR_TrackedCamera.Source(undistorted);
		source.Acquire();

		// Auto-disable if no camera is present. This can happen for the
		// initialization reason above, or because the camera has not been
		// enabled in the SteamVR settings.
		if (!source.hasCamera)
			enabled = false;
	}

	void OnDisable()
	{
		// Clear the texture when no longer active, so nothing is displayed.
		material.mainTexture = null;

		// The video stream must be symmetrically acquired and released in
		// order to properly disable the stream once there are no consumers.
		//
		// Every Acquire must be paired with a Release; SteamVR_TrackedCamera
		// reference-counts the stream and only releases the underlying camera
		// when the count reaches zero. The source is cached, so a temporary
		// variable is again fine.
		var source = SteamVR_TrackedCamera.Source(undistorted);
		source.Release();
	}

	// Update pulls the video stream image, frame by frame.
	void Update()
	{
		var source = SteamVR_TrackedCamera.Source(undistorted);
		var texture = source.texture;
		if (texture == null)
		{
			return;
		}

		// Apply the latest texture to the material.  This must be performed
		// every frame since the underlying texture is actually part of a ring
		// buffer which is updated in lock-step with its associated pose.
		// (You actually really only need to call any of the accessors which
		// internally call Update on the SteamVR_TrackedCamera.VideoStreamTexture).
		//
		// Assigning the latest texture to the material is what makes the video
		// visible; the VideoStreamTexture's property getters refresh it
		// automatically.
		material.mainTexture = texture;

		// Adjust the height of the quad based on the aspect to keep the texels square.
		var aspect = (float)texture.width / texture.height;

		// The undistorted video feed has 'bad' areas near the edges where the original
		// square texture feed is stretched to undo the fisheye from the lens.
		// Therefore, you'll want to crop it to the specified frameBounds to remove this.
		if (cropped)
		{
			var bounds = source.frameBounds;
			// mainTextureOffset defines where the texture starts.
			material.mainTextureOffset = new Vector2(bounds.uMin, bounds.vMin);
			var du = bounds.uMax - bounds.uMin;
			var dv = bounds.vMax - bounds.vMin;
			// mainTextureScale defines the size of the texture.
			material.mainTextureScale = new Vector2(du, dv);
			// Aspect ratio after accounting for the cropped bad areas.
			aspect *= Mathf.Abs(du / dv);
		}
		else
		{
			// Default, uncropped case.
			material.mainTextureOffset = Vector2.zero;
			// The -1 flips the Y direction.
			material.mainTextureScale = new Vector2(1, -1);
		}

		// Scale the Y axis by the aspect ratio to adjust the displayed height.
		target.localScale = new Vector3(1, 1.0f / aspect, 1);

		// Apply the pose that this frame was recorded at.
		//
		// If the pose is tracked (the camera sits on the headset, so it is
		// always tracked, and the pose is relative to the headset), move the
		// target along with it. In this sample the target is parented under
		// the headset (head), so assigning the pose directly makes it follow.
		if (source.hasTracking)
		{
			var t = source.transform;
			target.localPosition = t.pos;
			target.localRotation = t.rot;
		}
	}
}

 

12.2.2. SteamVR_TestTrackedCamera.mat

This is a material defined to accompany SteamVR_TestTrackedCamera; it is simply a material using the Unlit/Texture shader. Any material using a standard shader would work just as well.
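If you would rather build an equivalent material from code inside a MonoBehaviour, a trivial sketch (my own example, not from the plugin):

// Create a material equivalent to SteamVR_TestTrackedCamera.mat at runtime.
var mat = new Material(Shader.Find("Unlit/Texture"));
GetComponent<MeshRenderer>().material = mat;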

12.2.3. SteamVR_TestTrackedCamera.unity

This is the test scene for SteamVR_TestTrackedCamera. It is very simple: an empty TrackedCamera object is added under the CameraRig, with a Quad beneath it to display the video.

On the TrackedCamera object, Material is set to the SteamVR_TestTrackedCamera material and Target to TrackedCamera itself.

 

The Quad's MeshRenderer must also use the SteamVR_TestTrackedCamera material.

 

That is all it takes; at runtime you will see the front camera's live feed just below your forward view, following the headset.
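For reference, a sketch that builds the same hierarchy from code; the parent transform head (the hmd transform under [CameraRig]) and the component wiring are my reading of the scene, not values copied from the shipped asset:

// Rebuild the TrackedCamera test setup at runtime (hypothetical parenting).
var trackedCameraGO = new GameObject("TrackedCamera");
trackedCameraGO.transform.SetParent(head, false); // head: hmd transform, assumed

var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
quad.transform.SetParent(trackedCameraGO.transform, false);

var test = trackedCameraGO.AddComponent<SteamVR_TestTrackedCamera>();
test.material = quad.GetComponent<MeshRenderer>().material; // video material
test.target = trackedCameraGO.transform;                    // follows the camera pose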

12.3. Resources

12.3.1. SteamVR_Fade.shader

The shader used by SteamVR_Fade.cs to fade a camera's scene in and out was completely rewritten; according to the release notes, this accommodates changes in Unity 5.4. The rewritten shader is short: it alpha-blends over the scene, always passes the depth test, disables culling and depth writes, passes vertex positions straight through to clip space, and outputs a uniform fadeColor from the fragment shader.

 

Shader "Custom/SteamVR_Fade"

{

       SubShader

       {

              Pass

              {

                     BlendSrcAlpha OneMinusSrcAlpha

                     ZTestAlways

                     CullOff

                     ZWriteOff

 

                     CGPROGRAM

                            #pragmavertex MainVS

                            #pragmafragment MainPS

 

                            float4fadeColor;

 

                            float4MainVS( float4 vertex : POSITION ) : SV_POSITION

                            {

                                   returnvertex.xyzw;

                            }

 

                            float4MainPS() : SV_Target

                            {

                                   returnfadeColor.rgba;

                            }

                     ENDCG

              }

       }

}

12.4. Scripts

12.4.1. SteamVR.cs

This script received only a tiny change, improving robustness:

 

public static bool enabled
{
	get
	{
#if !(UNITY_5_3 || UNITY_5_2 || UNITY_5_1 || UNITY_5_0)
		if (!UnityEngine.VR.VRSettings.enabled)
			enabled = false;
#endif
		return _enabled;
	}
	set
	{
		_enabled = value;
		if (!_enabled)
			SafeDispose();
	}
}

 

The change: on Unity 5.4 and later, where the plugin relies on Unity's native VR support, SteamVR now disables itself when Unity's VR support is turned off.
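A one-line guard sketch for code that consumes the plugin (my own example; hmd_ModelNumber is a SteamVR property as far as I recall):

// Only touch SteamVR state when it has not disabled itself.
if (SteamVR.enabled && SteamVR.instance != null)
	Debug.Log("HMD: " + SteamVR.instance.hmd_ModelNumber);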

12.4.2. SteamVR_Fade.cs

The changes to this script, which fades a given camera's scene in and out, mainly accompany the SteamVR_Fade.shader rewrite above: the new shader expects a fadeColor parameter.

To support this, a new variable holds the id of that shader variable:

static int fadeMaterialColorID = -1;

The id is looked up in OnEnable:

void OnEnable()
{
	if (fadeMaterial == null)
	{
		fadeMaterial = new Material(Shader.Find("Custom/SteamVR_Fade"));
		// A pattern worth learning for interacting with shader variables:
		// look the property id up once and reuse it.
		fadeMaterialColorID = Shader.PropertyToID("fadeColor");
	}

	SteamVR_Utils.Event.Listen("fade", OnStartFade);
	SteamVR_Utils.Event.Send("fade_ready");
}

 

Then, when OnPostRender performs the actual fade, the color (and its alpha) is set directly on the shader. The GL calls also changed slightly: the orthographic projection is gone and the quad drawn is larger (2x2 instead of 1x1). This is because the new vertex shader passes positions straight through to clip space, where the full screen spans -1..1 on both axes:

if (currentColor.a > 0 && fadeMaterial)
{
	fadeMaterial.SetColor(fadeMaterialColorID, currentColor);
	fadeMaterial.SetPass(0);
	GL.Begin(GL.QUADS);
	GL.Vertex3(-1, -1, 0);
	GL.Vertex3( 1, -1, 0);
	GL.Vertex3( 1,  1, 0);
	GL.Vertex3(-1,  1, 0);
	GL.End();
}
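For completeness, a fade can be triggered from anywhere through the same "fade" event that OnStartFade listens to (a sketch assuming OnStartFade's (Color, float) argument convention; SteamVR_Fade also exposes a static Start helper that wraps this, if I recall the script correctly):

// Fade the camera to black over two seconds, then back.
SteamVR_Utils.Event.Send("fade", Color.black, 2.0f);
// ...later:
SteamVR_Utils.Event.Send("fade", Color.clear, 2.0f);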

12.4.3. SteamVR_GameView.cs

This script renders the companion window. From the v1.1.0 analysis we know the companion window shows what the headset shows because SteamVR_GameView blits the texture destined for the headset straight into the companion window. In v1.1.0 that texture was SteamVR_Camera._sceneTexture; in v1.1.1, when possible (i.e., when running on Direct3D), the so-called mirror texture is used instead.

 

First, a static variable is defined to hold the mirror texture:

static Texture2D mirrorTexture;

 

Then OnEnable creates it:

if (mirrorTexture == null)
{
	var vr = SteamVR.instance;
	if (vr != null && vr.graphicsAPI == EGraphicsAPIConvention.API_DirectX)
	{
		// Only available in DirectX mode. Note the technique used to get at
		// a DirectX texture (also used in SteamVR_TrackedCamera below): a
		// 2x2 Unity texture is created first, and its native texture pointer
		// is then used to obtain the DirectX texture.
		var tex = new Texture2D(2, 2);
		var nativeTex = System.IntPtr.Zero;
		// The core call is IVRCompositor.GetMirrorTextureD3D11, which
		// openvr.h documents as opening a shared D3D11 texture with the
		// undistorted composited image for each eye. Recall the "Headset
		// Mirror" item in the SteamVR menu on the PC, which shows the
		// left/right eye images as seen in the headset; the "mirror" here is
		// the same idea, so this is a mirror texture of the headset content.
		// What is the benefit? The texture is shared, but performance is not
		// obviously better: the previous approach grabbed the texture just
		// before submission to the hardware, which may already have been
		// distorted (although on the Vive it is undistorted).
		if (vr.compositor.GetMirrorTextureD3D11(EVREye.Eye_Right, tex.GetNativeTexturePtr(), ref nativeTex) == EVRCompositorError.None)
		{
			uint width = 0, height = 0;
			OpenVR.System.GetRecommendedRenderTargetSize(ref width, ref height);
			// Create a Unity texture wrapping the underlying DirectX texture.
			mirrorTexture = Texture2D.CreateExternalTexture((int)width, (int)height, TextureFormat.RGBA32, false, false, nativeTex);
		}
	}
}

 

Then, when drawing in OnPostRender, the mirror texture is used if it is available:

if (mirrorTexture != null)
	blitMaterial.mainTexture = mirrorTexture;
else
	blitMaterial.mainTexture = SteamVR_Camera.GetSceneTexture(camera.hdr);

12.4.4. SteamVR_LoadLevel.cs

This script handles the fade transition when switching scenes. The v1.1.1 change is just a renaming of some variables. Besides switching scenes within the same game, the script also supports switching to another app (application/exe). v1.1.0 called such an app an external app, while v1.1.1 calls it an internal process. I suspect the rename to "internal" is because these apps all live inside the Steam or Vive store.

 

The change is simple: externalAppPath and externalAppArgs were renamed to internalProcessPath and internalProcessArgs.
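A minimal usage sketch with the renamed fields (the path/args values are hypothetical, and the Trigger() entry point is from memory of the script, so treat it as an assumption):

// Configure a transition that launches another process (hypothetical values).
var loader = gameObject.AddComponent<SteamVR_LoadLevel>();
loader.internalProcessPath = "C:/Apps/OtherVRApp/OtherVRApp.exe"; // hypothetical path
loader.internalProcessArgs = "-vr";                               // hypothetical args
loader.Trigger(); // begin the transition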

12.4.5. SteamVR_PlayArea.cs

The play-area display changed in two places: the winding of the play-area mesh, and how the render material is obtained.

 

The triangle vertices of the play-area mesh changed from counter-clockwise to clockwise winding (in Unity, clockwise-wound triangles are front faces, so this flips which side of the border mesh is rendered):

var triangles = new int[]
{
	0, 4, 1,
	1, 4, 5,
	1, 5, 2,
	2, 5, 6,
	2, 6, 3,
	3, 6, 7,
	3, 7, 0,
	0, 7, 4
};

 

The generated mesh is otherwise unchanged.

 

The material used to be obtained like this:

#if UNITY_EDITOR && !(UNITY_5_3 || UNITY_5_2 || UNITY_5_1 || UNITY_5_0)
	renderer.material = UnityEditor.AssetDatabase.GetBuiltinExtraResource<Material>("Sprites-Default.mat");
#else
	renderer.material = Resources.GetBuiltinResource<Material>("Sprites-Default.mat");
#endif

That is, the material asset was fetched directly from the built-in resources. That asset is apparently gone in 5.4, so it was changed to:

renderer.material = new Material(Shader.Find("Sprites/Default"));

i.e., a material is now created from the Sprites/Default shader.

12.4.6. SteamVR_Render.cs

The main addition in this script is screenshot support. Previously, SteamVR_SkyboxEditor implemented its own screenshot method; now a dedicated mechanism is available. The OpenVR SDK corresponding to v1.1.0 was (probably) v0.9.2, while v1.1.1 corresponds to SDK 1.0.2. Many interfaces were added in between; besides IVRTrackedCamera, discussed above, there is also the IVRScreenshots interface.

 

A new function retrieves a screenshot file path (without extension):

private string GetScreenshotFilename(uint screenshotHandle, EVRScreenshotPropertyFilenames screenshotPropertyFilename)
{
	var error = EVRScreenshotError.None;
	// First call with a null buffer to learn the required string length.
	var capacity = OpenVR.Screenshots.GetScreenshotPropertyFilename(screenshotHandle, screenshotPropertyFilename, null, 0, ref error);
	if (error != EVRScreenshotError.None && error != EVRScreenshotError.BufferTooSmall)
		return null;
	if (capacity > 1)
	{
		// Then call again to fetch the actual string.
		var result = new System.Text.StringBuilder((int)capacity);
		OpenVR.Screenshots.GetScreenshotPropertyFilename(screenshotHandle, screenshotPropertyFilename, result, capacity, ref error);
		if (error != EVRScreenshotError.None)
			return null;
		return result.ToString();
	}
	return null;
}

 

Another new function hooks screenshots: when the user presses the screenshot chord (trigger + system button), this gets notified:

private void OnRequestScreenshot(params object[] args)
{
	var vrEvent = (VREvent_t)args[0];
	// vrEvent.data.screenshot is of type VREvent_Screenshot_t.
	var screenshotHandle = vrEvent.data.screenshot.handle;
	var screenshotType = (EVRScreenshotType)vrEvent.data.screenshot.type;

	if (screenshotType == EVRScreenshotType.StereoPanorama)
	{
		// Only stereo panorama screenshots are hooked here. The result
		// consists of a preview image and the actual image.
		string previewFilename = GetScreenshotFilename(screenshotHandle, EVRScreenshotPropertyFilenames.Preview);
		string VRFilename = GetScreenshotFilename(screenshotHandle, EVRScreenshotPropertyFilenames.VR);

		if (previewFilename == null || VRFilename == null)
			return;

		// Now take the screenshot. Oddly, this still uses its own capture
		// implementation instead of the IVRScreenshots capture API. To be
		// fair, that API has its own limitations (no completion notification,
		// for instance), and since the screenshot is hooked here, the capture
		// has to be done locally anyway. Arguably a design flaw: why not let
		// the system both capture and notify?

		// Do the stereo panorama screenshot
		// Figure out where the view is
		// The capture ultimately renders a camera by hand via
		// SteamVR_Utils.TakeStereoScreenshot, which needs a GameObject to
		// carry the camera.
		GameObject screenshotPosition = new GameObject("screenshotPosition");
		// Place this temporary object at the top-most camera's pose.
		screenshotPosition.transform.position = SteamVR_Render.Top().transform.position;
		screenshotPosition.transform.rotation = SteamVR_Render.Top().transform.rotation;
		screenshotPosition.transform.localScale = SteamVR_Render.Top().transform.lossyScale;
		SteamVR_Utils.TakeStereoScreenshot(screenshotHandle, screenshotPosition, 32, 0.064f, ref previewFilename, ref VRFilename);

		// and submit it
		// Submitting adds the screenshot to the SteamVR screenshot library.
		OpenVR.Screenshots.SubmitScreenshot(screenshotHandle, screenshotType, previewFilename, VRFilename);
	}
}

In OnEnable, the screenshot hook is installed:

// Listen for the "RequestScreenshot" event; events whose names start with a
// capital letter like this are sent by the system.
SteamVR_Utils.Event.Listen("RequestScreenshot", OnRequestScreenshot);

var vr = SteamVR.instance;
if (vr == null)
{
	enabled = false;
	return;
}

var types = new EVRScreenshotType[] { EVRScreenshotType.StereoPanorama };
// Hook the system screenshot; without this, the notification above never fires.
OpenVR.Screenshots.HookScreenshot(types);

 

SteamVR_Render里面还有一处小改动,就是如果选中了“Lock PhysicsUpdate”,在v1.1.0中会根据头显的实际帧率计算Time.fixedDeltaTimeTime.maximumDeltaTime,而在v1.1.1中不再计算Time.maximumDeltaTime

12.4.7. SteamVR_TrackedCamera.cs

This is a new script that drives the front-facing camera on the headset. The SteamVR system itself also uses the front camera, which can be enabled in the SteamVR settings.

Once enabled, pressing the menu button to show the dashboard attaches a small live view from the front camera to the controller, and double-tapping the system button at any time in a game shows the front camera feed full-screen in a grayscale-like style.

 

The script begins with a fairly detailed comment, translated here:

Purpose: access the video stream and pose of tracked cameras.

Usage:

	var source = SteamVR_TrackedCamera.Distorted();
	var source = SteamVR_TrackedCamera.Undistorted();

or:

	var undistorted = true; // or false
	var source = SteamVR_TrackedCamera.Source(undistorted);

- The distorted video is the decoded image straight from the camera.
- The undistorted video corrects for the camera lens (commonly described as a fisheye lens) so that straight lines look straight.

Acquire and Release on the VideoStreamTexture object must be called in pairs to ensure the video stream can shut down properly once there are no consumers. You only need to Acquire once when you start using the stream and Release once when you are done, rather than every frame.

 

It is a pure C# class and cannot be attached to a Unity object; see SteamVR_TestTrackedCamera for how it is used.

public class SteamVR_TrackedCamera
{
	// Inner class: wraps the video stream texture of a single tracked-device index.
	public class VideoStreamTexture
	{
		// Constructor: takes the device index and whether to undistort.
		public VideoStreamTexture(uint deviceIndex, bool undistorted)
		{
			this.undistorted = undistorted;
			// Create (or look up) the stream for this index.
			videostream = Stream(deviceIndex);
		}
		public bool undistorted { get; private set; }
		public uint deviceIndex { get { return videostream.deviceIndex; } }
		// Whether a front camera is present (i.e. usable).
		public bool hasCamera { get { return videostream.hasCamera; } }
		// Whether the video stream carries valid pose tracking information.
		public bool hasTracking { get { Update(); return header.standingTrackedDevicePose.bPoseIsValid; } }
		public uint frameId { get { Update(); return header.nFrameSequence; } }
		// The valid region after distortion correction (the bad areas removed).
		public VRTextureBounds_t frameBounds { get; private set; }
		// Frame type: whether distortion has been corrected.
		public EVRTrackedCameraFrameType frameType { get { return undistorted ? EVRTrackedCameraFrameType.Undistorted : EVRTrackedCameraFrameType.Distorted; } }

		// Unity texture wrapping the underlying native texture.
		Texture2D _texture;
		public Texture2D texture { get { Update(); return _texture; } }

		// The pose converted to a RigidTransform; this is the pose the frame
		// was captured at, relative to the headset.
		public SteamVR_Utils.RigidTransform transform { get { Update(); return new SteamVR_Utils.RigidTransform(header.standingTrackedDevicePose.mDeviceToAbsoluteTracking); } }
		// Instantaneous velocity.
		public Vector3 velocity { get { Update(); var pose = header.standingTrackedDevicePose; return new Vector3(pose.vVelocity.v0, pose.vVelocity.v1, -pose.vVelocity.v2); } }
		// Instantaneous angular velocity.
		public Vector3 angularVelocity { get { Update(); var pose = header.standingTrackedDevicePose; return new Vector3(-pose.vAngularVelocity.v0, -pose.vAngularVelocity.v1, pose.vAngularVelocity.v2); } }
		// The raw pose data.
		public TrackedDevicePose_t GetPose() { Update(); return header.standingTrackedDevicePose; }

		// Acquire the video stream.
		public ulong Acquire()
		{
			return videostream.Acquire();
		}
		// Release the video stream.
		public ulong Release()
		{
			var result = videostream.Release();

			if (videostream.handle == 0)
			{
				// The underlying stream was released too; clear the texture.
				Object.Destroy(_texture);
				_texture = null;
			}

			return result;
		}

		// Fetch the latest video frame and its capture pose.
		int prevFrameCount = -1;
		void Update()
		{
			// Already refreshed during this Unity frame.
			if (Time.frameCount == prevFrameCount)
				return;

			prevFrameCount = Time.frameCount;

			// The underlying handle is invalid (the stream was never acquired).
			if (videostream.handle == 0)
				return;

			var vr = SteamVR.instance;
			if (vr == null)
				return;

			// Get the IVRTrackedCamera interface object.
			var trackedCamera = OpenVR.TrackedCamera;
			if (trackedCamera == null)
				return;

			var nativeTex = System.IntPtr.Zero;
			// If the Unity texture has not been created yet, use a temporary
			// 2x2 texture for now; the real one is created further down.
			var deviceTexture = (_texture != null) ? _texture : new Texture2D(2, 2);
			// Size of the CameraVideoStreamFrameHeader_t struct; note the C#
			// equivalent of sizeof in C/C++.
			var headerSize = (uint)System.Runtime.InteropServices.Marshal.SizeOf(header.GetType());

			if (vr.graphicsAPI == EGraphicsAPIConvention.API_OpenGL)
			{
				// Does the OpenGL texture id really need to be released and
				// re-acquired every time?
				if (glTextureId != 0)
					trackedCamera.ReleaseVideoStreamTextureGL(videostream.handle, glTextureId);

				// Fetch the current frame data, including the underlying
				// OpenGL texture id.
				if (trackedCamera.GetVideoStreamTextureGL(videostream.handle, frameType, ref glTextureId, ref header, headerSize) != EVRTrackedCameraError.None)
					return;

				nativeTex = (System.IntPtr)glTextureId;
			}
			else
			{
				// The Direct3D case.
				if (trackedCamera.GetVideoStreamTextureD3D11(videostream.handle, frameType, deviceTexture.GetNativeTexturePtr(), ref nativeTex, ref header, headerSize) != EVRTrackedCameraError.None)
					return;
			}

			if (_texture == null)
			{
				// The Unity texture does not exist yet; create it from the
				// frame size and the native texture handle.
				_texture = Texture2D.CreateExternalTexture((int)header.nWidth, (int)header.nHeight, TextureFormat.RGBA32, false, false, nativeTex);

				// Convert the valid (corrected) texture bounds.
				uint width = 0, height = 0;
				var frameBounds = new VRTextureBounds_t();
				if (trackedCamera.GetVideoStreamTextureSize(deviceIndex, frameType, ref frameBounds, ref width, ref height) == EVRTrackedCameraError.None)
				{
					// Account for textures being upside-down in Unity.
					frameBounds.vMin = 1.0f - frameBounds.vMin;
					frameBounds.vMax = 1.0f - frameBounds.vMax;
					this.frameBounds = frameBounds;
				}
			}
			else
			{
				// Refresh the Unity texture from the native texture; this is
				// what Unity calls an external texture.
				_texture.UpdateExternalTexture(nativeTex);
			}
		}

		// The OpenGL texture id.
		uint glTextureId;
		// The wrapped video stream.
		VideoStream videostream;
		// The data structure describing one video frame.
		CameraVideoStreamFrameHeader_t header;
	}

	#region Top level accessors.

	// Get the distorted (uncorrected) video stream texture for a device index.
	public static VideoStreamTexture Distorted(int deviceIndex = (int)OpenVR.k_unTrackedDeviceIndex_Hmd)
	{
		if (distorted == null)
			// Slots are reserved for all 16 tracked devices, even though
			// currently only the headset has a camera.
			distorted = new VideoStreamTexture[OpenVR.k_unMaxTrackedDeviceCount];
		if (distorted[deviceIndex] == null)
			distorted[deviceIndex] = new VideoStreamTexture((uint)deviceIndex, false);
		return distorted[deviceIndex];
	}

	// Get the undistorted (corrected) video stream texture for a device index.
	public static VideoStreamTexture Undistorted(int deviceIndex = (int)OpenVR.k_unTrackedDeviceIndex_Hmd)
	{
		if (undistorted == null)
			undistorted = new VideoStreamTexture[OpenVR.k_unMaxTrackedDeviceCount];
		if (undistorted[deviceIndex] == null)
			undistorted[deviceIndex] = new VideoStreamTexture((uint)deviceIndex, true);
		return undistorted[deviceIndex];
	}

	// Convenience wrapper: pick the stream texture by distortion type.
	public static VideoStreamTexture Source(bool undistorted, int deviceIndex = (int)OpenVR.k_unTrackedDeviceIndex_Hmd)
	{
		return undistorted ? Undistorted(deviceIndex) : Distorted(deviceIndex);
	}

	// Arrays of uncorrected and corrected video stream textures (with space
	// reserved for every tracked device).
	private static VideoStreamTexture[] distorted, undistorted;

	#endregion

	#region Internal class to manage lifetime of video streams (per device).

	// Manages a single video stream.
	class VideoStream
	{
		// Create the video stream for a device index.
		public VideoStream(uint deviceIndex)
		{
			this.deviceIndex = deviceIndex;
			var trackedCamera = OpenVR.TrackedCamera;
			if (trackedCamera != null)
				// Check whether the tracked device at this index has a camera.
				trackedCamera.HasCamera(deviceIndex, ref _hasCamera);
		}
		public uint deviceIndex { get; private set; }

		// The underlying video stream handle.
		ulong _handle;
		public ulong handle { get { return _handle; } }

		// Whether the tracked device at this index has a camera.
		bool _hasCamera;
		public bool hasCamera { get { return _hasCamera; } }

		// Reference counting manages the stream's lifetime, since one stream
		// may be consumed from several places.
		ulong refCount;
		public ulong Acquire()
		{
			if (_handle == 0 && hasCamera)
			{
				// The underlying stream has not been acquired yet; do it here.
				var trackedCamera = OpenVR.TrackedCamera;
				if (trackedCamera != null)
					trackedCamera.AcquireVideoStreamingService(deviceIndex, ref _handle);
			}
			return ++refCount;
		}
		// Consumers should release the stream promptly once done with it.
		public ulong Release()
		{
			if (refCount > 0 && --refCount == 0 && _handle != 0)
			{
				// The count reached zero; release the underlying stream.
				var trackedCamera = OpenVR.TrackedCamera;
				if (trackedCamera != null)
					trackedCamera.ReleaseVideoStreamingService(_handle);
				_handle = 0;
			}
			return refCount;
		}
	}

	// Static helper that creates the video stream for a given device index.
	static VideoStream Stream(uint deviceIndex)
	{
		if (videostreams == null)
			// Again, a slot is reserved for every tracked device.
			videostreams = new VideoStream[OpenVR.k_unMaxTrackedDeviceCount];
		if (videostreams[deviceIndex] == null)
			videostreams[deviceIndex] = new VideoStream(deviceIndex);
		return videostreams[deviceIndex];
	}

	// Cached video stream objects for all tracked device indices; currently
	// only the headset actually has a camera.
	static VideoStream[] videostreams;

	#endregion
}

12.4.8. SteamVR_Utils.cs

The change here is a new static screenshot function. The IVRScreenshots interface actually has its own TakeStereoScreenshot function, but this is an entirely independent implementation, done the same way as in SteamVR_SkyboxEditor.

 

The caller passes in a target GameObject carrying a camera; the screenshot is taken from the target's viewpoint.

public static void TakeStereoScreenshot(uint screenshotHandle, GameObject target, int cellSize, float ipd, ref string previewFilename, ref string VRFilename)
{
	// Each eye's panorama is 4096x2048...
	const int width = 4096;
	const int height = width / 2;
	const int halfHeight = height / 2;

	// ...so the final texture is 4096x4096, with the left and right eye
	// images stacked vertically.
	var texture = new Texture2D(width, height * 2, TextureFormat.ARGB32, false);

	// Used to measure how long the capture takes.
	var timer = new System.Diagnostics.Stopwatch();

	Camera tempCamera = null;

	timer.Start();

	// The target is expected to carry a Camera; if it does not, a temporary
	// one is added.
	var camera = target.GetComponent<Camera>();
	if (camera == null)
	{
		if (tempCamera == null)
			tempCamera = new GameObject().AddComponent<Camera>();
		camera = tempCamera;
	}

	// First render the preview texture, at 2048x2048.
	// Render preview texture
	const int previewWidth = 2048;
	const int previewHeight = 2048;
	var previewTexture = new Texture2D(previewWidth, previewHeight, TextureFormat.ARGB32, false);
	var targetPreviewTexture = new RenderTexture(previewWidth, previewHeight, 24);

	// Save the camera parameters so they can be restored afterwards.
	var oldTargetTexture = camera.targetTexture;
	var oldOrthographic = camera.orthographic;
	var oldFieldOfView = camera.fieldOfView;
	var oldAspect = camera.aspect;
#if !(UNITY_5_3 || UNITY_5_2 || UNITY_5_1 || UNITY_5_0)
	var oldstereoTargetEye = camera.stereoTargetEye;
	camera.stereoTargetEye = StereoTargetEyeMask.None;
#endif
	camera.fieldOfView = 60.0f;
	camera.orthographic = false;
	// Render directly into targetPreviewTexture.
	camera.targetTexture = targetPreviewTexture;
	camera.aspect = 1.0f;
	camera.Render();

	// copy preview texture
	// Convert the camera's render texture into an ordinary texture
	// using ReadPixels.
	RenderTexture.active = targetPreviewTexture;
	previewTexture.ReadPixels(new Rect(0, 0, targetPreviewTexture.width, targetPreviewTexture.height), 0, 0);
	RenderTexture.active = null;
	camera.targetTexture = null;
	Object.DestroyImmediate(targetPreviewTexture);

	// Now generate the left/right-eye stereo panoramas using a spherical
	// projection (a quarter hemisphere section at a time per eye). This is
	// similar to the approach in SteamVR_SkyboxEditor; there is a lot of
	// math involved, to be analyzed later (TODO).
	var fx = camera.gameObject.AddComponent<SteamVR_SphericalProjection>();

	var oldPosition = target.transform.localPosition;
	var oldRotation = target.transform.localRotation;
	var basePosition = target.transform.position;
	var baseRotation = Quaternion.Euler(0, target.transform.rotation.eulerAngles.y, 0);

	var transform = camera.transform;

	int vTotal = halfHeight / cellSize;
	float dv = 90.0f / vTotal; // vertical degrees per segment
	float dvHalf = dv / 2.0f;

	var targetTexture = new RenderTexture(cellSize, cellSize, 24);
	targetTexture.wrapMode = TextureWrapMode.Clamp;
	targetTexture.antiAliasing = 8;

	camera.fieldOfView = dv;
	camera.orthographic = false;
	camera.targetTexture = targetTexture;
	camera.aspect = oldAspect;
#if !(UNITY_5_3 || UNITY_5_2 || UNITY_5_1 || UNITY_5_0)
	camera.stereoTargetEye = StereoTargetEyeMask.None;
#endif

	// Render sections of a sphere using a rectilinear projection
	// and resample using a spherical projection into a single panorama
	// texture per eye.  We break into sections in order to keep the eye
	// separation similar around the sphere.  Rendering alternates between
	// top and bottom sections, sweeping horizontally around the sphere,
	// alternating left and right eyes.
	for (int v = 0; v < vTotal; v++)
	{
		var pitch = 90.0f - (v * dv) - dvHalf;
		var uTotal = width / targetTexture.width;
		var du = 360.0f / uTotal; // horizontal degrees per segment
		var duHalf = du / 2.0f;

		var vTarget = v * halfHeight / vTotal;

		for (int i = 0; i < 2; i++) // top, bottom
		{
			if (i == 1)
			{
				pitch = -pitch;
				vTarget = height - vTarget - cellSize;
			}

			for (int u = 0; u < uTotal; u++)
			{
				var yaw = -180.0f + (u * du) + duHalf;

				var uTarget = u * width / uTotal;

				var vTargetOffset = 0;
				var xOffset = -ipd / 2 * Mathf.Cos(pitch * Mathf.Deg2Rad);

				for (int j = 0; j < 2; j++) // left, right
				{
					if (j == 1)
					{
						vTargetOffset = height;
						xOffset = -xOffset;
					}

					var offset = baseRotation * Quaternion.Euler(0, yaw, 0) * new Vector3(xOffset, 0, 0);
					transform.position = basePosition + offset;

					var direction = Quaternion.Euler(pitch, yaw, 0.0f);
					transform.rotation = baseRotation * direction;

					// vector pointing to center of this section
					var N = direction * Vector3.forward;

					// horizontal span of this section in degrees
					var phi0 = yaw - (du / 2);
					var phi1 = phi0 + du;

					// vertical span of this section in degrees
					var theta0 = pitch + (dv / 2);
					var theta1 = theta0 - dv;

					var midPhi = (phi0 + phi1) / 2;
					var baseTheta = Mathf.Abs(theta0) < Mathf.Abs(theta1) ? theta0 : theta1;

					// vectors pointing to corners of image closest to the equator
					var V00 = Quaternion.Euler(baseTheta, phi0, 0.0f) * Vector3.forward;
					var V01 = Quaternion.Euler(baseTheta, phi1, 0.0f) * Vector3.forward;

					// vectors pointing to top and bottom midsection of image
					var V0M = Quaternion.Euler(theta0, midPhi, 0.0f) * Vector3.forward;
					var V1M = Quaternion.Euler(theta1, midPhi, 0.0f) * Vector3.forward;

					// intersection points for each of the above
					var P00 = V00 / Vector3.Dot(V00, N);
					var P01 = V01 / Vector3.Dot(V01, N);
					var P0M = V0M / Vector3.Dot(V0M, N);
					var P1M = V1M / Vector3.Dot(V1M, N);

					// calculate basis vectors for plane
					var P00_P01 = P01 - P00;
					var P0M_P1M = P1M - P0M;

					var uMag = P00_P01.magnitude;
					var vMag = P0M_P1M.magnitude;

					var uScale = 1.0f / uMag;
					var vScale = 1.0f / vMag;

					var uAxis = P00_P01 * uScale;
					var vAxis = P0M_P1M * vScale;

					// update material constant buffer
					fx.Set(N, phi0, phi1, theta0, theta1,
						uAxis, P00, uScale,
						vAxis, P0M, vScale);

					camera.aspect = uMag / vMag;
					camera.Render();

					RenderTexture.active = targetTexture;
					texture.ReadPixels(new Rect(0, 0, targetTexture.width, targetTexture.height), uTarget, vTarget + vTargetOffset);
					RenderTexture.active = null;
				}

				// Update progress
				// Progress is fed back to SteamVR, which shows a progress bar
				// as an overlay.
				var progress = (float)(v * (uTotal * 2.0f) + u + i * uTotal) / (float)(vTotal * (uTotal * 2.0f));
				OpenVR.Screenshots.UpdateScreenshotProgress(screenshotHandle, progress);
			}
		}
	}

	// 100% flush
	OpenVR.Screenshots.UpdateScreenshotProgress(screenshotHandle, 1.0f);

	// Save textures to disk.
	// Add extensions
	previewFilename += ".png";
	VRFilename += ".png";

	// Preview
	previewTexture.Apply();
	// Write the png files to disk.
	System.IO.File.WriteAllBytes(previewFilename, previewTexture.EncodeToPNG());

	// VR
	texture.Apply();
	System.IO.File.WriteAllBytes(VRFilename, texture.EncodeToPNG());

	// Cleanup.
	if (camera != tempCamera)
	{
		// Restore the original camera parameters.
		camera.targetTexture = oldTargetTexture;
		camera.orthographic = oldOrthographic;
		camera.fieldOfView = oldFieldOfView;
		camera.aspect = oldAspect;
#if !(UNITY_5_3 || UNITY_5_2 || UNITY_5_1 || UNITY_5_0)
		camera.stereoTargetEye = oldstereoTargetEye;
#endif

		target.transform.localPosition = oldPosition;
		target.transform.localRotation = oldRotation;
	}
	else
	{
		tempCamera.targetTexture = null;
	}

	Object.DestroyImmediate(targetTexture);
	Object.DestroyImmediate(fx);

	// Log the elapsed time.
	timer.Stop();
	Debug.Log(string.Format("Screenshot took {0} seconds.", timer.Elapsed));

	if (tempCamera != null)
	{
		Object.DestroyImmediate(tempCamera.gameObject);
	}

	Object.DestroyImmediate(previewTexture);
	Object.DestroyImmediate(texture);
}
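To close, a sketch of calling it directly from a MonoBehaviour, outside the screenshot-hook flow (the handle 0 and the file names are placeholders for illustration; with no real handle, the progress updates simply have no effect):

// Capture a stereo panorama from this object's viewpoint (illustrative values).
string preview = Application.dataPath + "/screenshot_preview"; // ".png" is appended inside
string vrShot = Application.dataPath + "/screenshot_vr";
SteamVR_Utils.TakeStereoScreenshot(0, gameObject, 32, 0.064f, ref preview, ref vrShot);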
