DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Response to Amendment
Applicant’s response filed on September 4, 2025 has been received and considered. In the response, Applicant amended claims 1, 8, and 15. Claims 1-20 remain pending.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 5-8, 12-15, 19, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bi et al. (pub. no. 20120236002).
Regarding claim 1, Bi discloses a computer-implemented method comprising: receiving rendering parameters provided as part of a modified OpenGL pipeline in an interception layer to an original application; wherein the interception layer is located at a graphics layer (“In one example, a method of converting non-stereoscopic 3D content to 3D content includes intercepting an application program interface (API) call; determining from the API call, a model view projection matrix for the non-stereoscopic 3D content; modifying the model view projection matrix to generate a modified model view projection matrix; based on the modified model view projection matrix, generating left view clipping coordinates; based on the modified model view projection matrix, generating right view clipping coordinates; based on the left view clipping coordinates, generating a left view; based on the right view clipping coordinates, generating a right view; and, based on the left view and the right view, rendering an 3D image”, [0007];
“To generate 3D content from native 3D content, the techniques of this disclosure include techniques for 3D-to-3D conversion. The techniques include software for intercepting selected API calls from a 3D application to a 3D graphics API. The intercepted API calls can be modified in a manner that causes a system for generating graphics to create binocular views that can be displayed on a 3D display in run time, thus creating a 3D effect for users viewing the rendered content. Two images, a left-eye image and a right-eye image, can be rendered by the same graphics pipeline, based on an analysis of the native 3D content. The two images can be generated using different setups for viewing locations and/or directions. A system implementing the techniques of this disclosure can be located right above a graphics API, such as an OpenGL ES API used in mobile devices, enabling API calls from a 3D graphics application to be intercepted. In some implementations, the 3D-to-3D conversion system can be implemented only using software and without requiring changes to GPU hardware, graphics driver code, or to 3D graphics application content. The techniques of this disclosure can be applied with OpenGL, OpenGL ES, and other graphics APIs”, [0024];
“In one or more example techniques described in this disclosure, graphics driver wrapper 216, which may be software executing on application processor 212, may modify API calls that define the clipping coordinates for the mono view to define clipping coordinates for the stereoscopic view (e.g., clipping coordinates for the left-eye image and clipping coordinates for the right-eye image). Also, graphics driver wrapper 216, in addition to modifying API calls that define the clipping coordinates, may modify API calls that define the viewport for the mono view to define viewports for the stereoscopic view. For example, application 232 may define the size and location of the single image (e.g., mono view) on the display that displays the image. Graphics driver wrapper 216 may modify API calls that define the size and location of the single image to modified API calls that define the size and location of the left-eye image and the right-eye image (e.g., instructions for the viewport for the left-eye image and the viewport for the right-eye image). In this manner, graphics driver wrapper 216 may intercept a single API call for a mono view and generate modified API calls for both a left view and a right view”, [0054]);
generating an application call request to an effect loader and an effect shader; in response to the application call request, receiving an application call response for a left output and a right output; and transmitting the left output and the right output to an OpenGL application programming interface (API) to create a three-dimensional (3D) rendered image in the original application (“When 3D-to-3D conversion is enabled (see. e.g. 120, yes), 3D-to-3D conversion module 124 can intercept API calls of graphics content 112 and modify the API calls in a manner that will cause them to produce a left-eye view and a right-eye view as opposed to a single mono view. The modified API calls produced by 3D-to-3D conversion module 124 can then be taken by API 128 to cause a GPU to render both a left-eye image and a right-eye image. The modified API calls can be executed by vertex processing module 132, left binning unit 136, right binning unit 140, and pixel processing module 144 to produce a left-eye image to be stored in left frame buffer 148 and a right-eye image to be stored in right frame buffer 152. In the example of system 100, 3D-to-3D conversion module 124 represents an application running on an application processor that is configured to intercept API calls and perform modifications to those API calls. The modifications to the API calls enabled 3D graphics content to be rendered as 3D graphics content by a GPU”, [0029]).
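For purposes of illustration only, the interception mechanism Bi describes (a graphics driver wrapper that turns one mono-view API call into separate left- and right-view calls, [0029] and [0054]) can be sketched as follows. All function and parameter names below are hypothetical and are drawn from neither the claims nor the Bi reference; the sketch merely shows a single call being re-issued once per eye.

```python
# Hypothetical sketch of the graphics-driver-wrapper idea in Bi ([0029],
# [0054]): a single mono-view draw call is intercepted and re-issued
# twice, once per eye, with a per-eye horizontal view offset and a
# per-eye half-viewport. No name here comes from the cited reference.

def make_stereo_interceptor(draw_call, eye_separation=6.0):
    """Wrap a mono-view draw call so it yields a left and a right view.

    draw_call(view_offset_x, viewport) stands in for a real graphics API
    entry point; eye_separation is the assumed interocular distance.
    """
    half = eye_separation / 2.0

    def stereo_draw(viewport):
        width, height = viewport
        # Left eye: shift the view by -half; render into the left half-viewport.
        left = draw_call(-half, (width // 2, height))
        # Right eye: shift the view by +half; render into the right half-viewport.
        right = draw_call(+half, (width // 2, height))
        return left, right

    return stereo_draw

# A dummy "API call" that simply records and returns its arguments:
calls = []
def dummy_draw(offset, viewport):
    calls.append((offset, viewport))
    return (offset, viewport)

stereo = make_stereo_interceptor(dummy_draw, eye_separation=6.0)
left, right = stereo((1920, 1080))
```

The single intercepted call produces two recorded calls with mirrored view offsets, mirroring Bi's statement that the wrapper "may intercept a single API call for a mono view and generate modified API calls for both a left view and a right view" ([0054]).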
Regarding claim 5, Bi discloses the original application is a two-dimensional (2D) game (“As an example, application processor 212 may execute one or more applications, such as application 232, stored in system memory 226. Examples of application 232 include, but are not limited to, web browsers, e-mail applications, spreadsheets, video games, or other applications that generate viewable objects for display. For instance, application 232 may be a video game that when executed outputs 3D graphical content that is displayed on a display”, [0041];
“Application 232 may be designed by a developer for mono view. For example, application 232, upon execution, may generate 3D graphics content, where the 3D graphics content is constrained to the 2D area of the display”, [0042]).
Regarding claim 6, Bi discloses the effect loader comprises a 3D library of 3D objects (“In a typical 3D graphics pipelines, 3D graphics content is first in the form of primitive data that describes geometrics primitives. For both a left image and a right image, vertex processing unit 132 can generate a set of pixel locations in a 2D display plane based on the geometric primitive data. Left binning unit 136 can assemble geometric primitives associated with the left image on a tile-by-tile basis, where a tile corresponds to a portion of the left image. Similarly, right binning unit 140 can assemble geometric primitives associated with the right image on a tile-by-tile basis”, [0030]).
Regarding claim 7, Bi discloses the application call response is generated in consideration of a layout of the user’s eyes using a disparity measurement (“VTleft-eye and VTright-eye may be 4x4 view transformation matrices that are based on an assumed distance of the left eye and right eye away from the mono view. For example, if the coordinates of the mono view are assumed to be (0, 0, 0), then the left eye may be considered to be located at (-D, 0, 0), and the right eye may be considered to be located at (D, 0, 0). In other words, the (0, 0, 0) location may be considered as being in the middle of the right eye and the left eye of the viewer. If the left eye is considered to be located -D away from the middle of the right eye and the left eye, and right eye is considered to be located +D away from the middle of the right eye and the left eye, then D indicates half of the distance between the right eye and left eye of the viewer”, [0057]).
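The eye geometry cited from Bi [0057] can be illustrated with a short sketch: the mono viewpoint sits at (0, 0, 0), the left eye at (-D, 0, 0), and the right eye at (D, 0, 0), so 2*D is the interocular distance. The 4x4 translation matrices below merely stand in for the cited VTleft-eye and VTright-eye view transformation matrices; the concrete values are illustrative assumptions, not figures from the reference.

```python
# Sketch of the eye placement in Bi [0057]: mono viewpoint at (0, 0, 0),
# left eye at (-D, 0, 0), right eye at (D, 0, 0). Simple x-translation
# matrices stand in for the cited 4x4 view transformation matrices.

def translation_x(tx):
    """4x4 homogeneous matrix translating by tx along the x-axis."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def apply(m, v):
    """Multiply a 4x4 matrix by a homogeneous column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

D = 3.0  # half the interocular distance (units arbitrary, assumed)

# A view transform shifts the world opposite the eye's offset: the left
# eye at (-D, 0, 0) sees the world translated by +D, and vice versa.
VT_left = translation_x(+D)
VT_right = translation_x(-D)

vertex = [10.0, 0.0, -50.0, 1.0]   # a vertex in mono-view coordinates
left_view = apply(VT_left, vertex)
right_view = apply(VT_right, vertex)
```

The same vertex lands at x-coordinates differing by exactly 2*D in the two eye spaces; that horizontal difference is the disparity that produces the stereoscopic effect.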
Claims 8 and 12-14 are directed to devices that implement the methods of claims 1 and 5-7 respectively and are rejected for the same reasons as claims 1 and 5-7 respectively.
Claims 15, 19 and 20 are directed to articles of manufacture containing code that implements the methods of claims 1, 5 and 6 respectively and are rejected for the same reasons as claims 1, 5 and 6 respectively.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 2-4, 9-11, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Bi et al. (pub. no. 20120236002) in view of Neill et al. (pub. no. 20150213640).
Regarding claims 2-4, it is noted that Bi does not disclose single and dual rendering modes. Neill, however, teaches single and dual rendering modes (“A method for stereoscopically presenting visual content is disclosed. The method comprises identifying and distinguishing between a first type of content and a second type of content of a frame to be stereoscopically displayed. The method also comprises rendering the first type of content in a first left and a first right frame from a single perspective using a first stereoscopic rendering method. Further, the method comprises rendering the second type of content in a second left and a second right frame using a second, different stereoscopic method from two different perspectives. Additionally, the method comprises merging the first and second left frames and the first and second right frames to produce a resultant left frame and a resultant right frame. Finally, the method comprises displaying the resultant left frame and the resultant right frame for stereoscopic perception by a viewer”, abstract;
“Various methods exist for stereoscopically presenting visual content (so that users perceive a 3D visual effect). Each of these methods has associated tradeoffs. For example, one method might produce very high-quality visual effects with minimal artifacts, but at a high cost in terms of complexity and consumption of processing resources, e.g., full stereoscopic 3D vision implemented with draw calls to the right and left eyes. Another method might enable fast and efficient real-time processing, but cause eyestrain or produce undesirable artifacts when rendering specific types of content, e.g., addressing transparency issues with techniques like depth-image based rendering (DIBR). Still other methods might require use of complex, bulky photographic equipment in order to record separate visual channels (e.g., providing left and right eye perspectives).
Depth-image based rendering (DIBR) can be particularly advantageous in settings where high-speed rendering is desirable, for example in certain types of computer gaming applications. In typical DIBR methods, workflow employs a data structure in which pixel color data is augmented with depth information for each pixel. Depth can be specified in terms of various frames of reference--e.g., distance from a user's vantage point, distance from a light source, etc. DIBR excels in many respects and under various metrics, although DIBR methods break down when rendering certain types of content.
In particular, DIBR methods struggle in the face of occlusion, transparency, and depth-blended content. For example, effects like transparency are difficult to solve with DIBR since there typically is no depth information for blended pixels at the stage in the rendering pipeline at which DIBR is applied. This is particularly noticeable in HUD (heads-up display) elements, which are usually blended on top of the scene as a post-process. In video gaming, HUD elements is the method by which information is visually relayed to the player as part of a game's user interface. The HUD is frequently used to simultaneously display several pieces of information including the main character's health, items, and an indication of game progression. Because effects like occlusion and transparency are difficult to solve with DIBR, a HUD element in a typical video game utilizing DIBR may appear either shifted or skewed off-screen (because of occlusion issues) or overlaid on video game elements underneath it (because of transparency issues), thereby, obstructing and disrupting a user's perception of the HUD element”, [0003] – [0005];
“Accordingly, a need exists to minimize the effects of occlusion and transparency when using image rendering techniques such as DIBR. In one embodiment, to address the distortive effects of occlusion and transparency when performing DIBR, full stereoscopic 3D vision rendering techniques are used to generate the HUD elements in the scene using two different viewing perspectives, while regular DIBR is used to generate all the remaining elements in the scene. 3D vision rendering techniques duplicate draw calls to both the left and the right eyes and, therefore, the distortive effects of occlusion and transparency are avoided when creating the HUD elements for on-screen display. The results of the 3D vision and DIBR rendering methods are then combined to generate the resultant stereoscopic images for display”, [0006]);
Exemplary rationales that may support a conclusion of obviousness include the use of a known technique to improve similar devices (methods, or products) in the same way. Here, both Bi and Neill are directed to stereoscopic rendering techniques. To include the single and dual rendering modes of Neill in the Bi invention would be to use a known technique to improve similar devices in the same way. Therefore, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the claimed invention to modify Bi to use the hybrid rendering technique of Neill. Doing so would reduce computational requirements while preserving image quality.
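The merge step Neill describes (HUD elements rendered with full stereo draw calls, the remaining scene with DIBR, then per-eye frames composited) can be illustrated with a minimal sketch. Frames are modeled as flat lists of pixels and every name is hypothetical; this shows only the compositing logic of Neill's abstract, not either reference's actual pipeline.

```python
# Illustrative sketch of the merge step in Neill (abstract, [0006]):
# the scene is rendered per-eye with one method (e.g., DIBR) and HUD
# elements with another (full stereo draw calls); the per-eye frames
# are then composited, HUD on top. Frames here are flat lists of
# pixels; None marks a transparent HUD pixel. All names are assumed.

def merge_frames(scene_frame, hud_frame):
    """Overlay HUD pixels onto the scene; None in the HUD means 'show scene'."""
    return [hud if hud is not None else scene
            for scene, hud in zip(scene_frame, hud_frame)]

def composite_stereo(scene_left, scene_right, hud_left, hud_right):
    """Produce the resultant left and right frames (cf. Neill, abstract)."""
    return (merge_frames(scene_left, hud_left),
            merge_frames(scene_right, hud_right))

# Usage: a 4-pixel scene with a 1-pixel HUD element drawn at a slightly
# different position in each eye's frame (binocular disparity).
scene_l = ["s0", "s1", "s2", "s3"]
scene_r = ["t0", "t1", "t2", "t3"]
hud_l = [None, "H", None, None]
hud_r = [None, None, "H", None]

out_l, out_r = composite_stereo(scene_l, scene_r, hud_l, hud_r)
```

Because the HUD is composited after, rather than during, the depth-based scene pass, the transparency and occlusion artifacts Neill attributes to pure DIBR ([0005]) do not affect the HUD pixels.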
Claims 9-11 are directed to devices that implement the methods of claims 2-4 respectively and are rejected for the same reasons as claims 2-4 respectively.
Claims 16-18 are directed to articles of manufacture containing code that implements the methods of claims 2-4 respectively and are rejected for the same reasons as claims 2-4 respectively.
Response to Arguments
Applicant’s arguments filed on September 4, 2025 have been fully considered but are moot in view of the new ground of rejection set forth above.
On pages 6-8, Applicant argues the amended claims overcome the prior art of record because Scallie does not disclose interception at the graphics layer. Examiner agrees. However, new prior art, Bi, discloses interception at the graphics layer as detailed above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE STEFAN GALKA whose telephone number is (571)270-1386. The examiner can normally be reached M-F 6-9 & 12-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Lewis, can be reached at 571-272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LAWRENCE S GALKA/Primary Examiner, Art Unit 3715