DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed 1/15/2026 with respect to claims 1-8 and 11-20 have been fully considered but are moot in view of the new ground(s) of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Finding et al. (PGPUB Document No. US 2018/0253900) in view of McLachlan et al. (PGPUB Document No. US 2023/0351704), further in view of Rodriguez (PGPUB Document No. US 2022/0139050), and further in view of Grzesiak et al. (PGPUB Document No. US 2023/0171444).
Regarding claim 1, Finding teaches a method, the method adapted for use in displaying computer-generated content, the method comprising:
Communicating with an apparatus, the apparatus adapted to be coupled to and in communication with a local software application and a server (server communicating with AR devices (Finding: 0074));
Wherein the apparatus comprises:
Apparatus electronic circuitry and hardware including:
An apparatus processor (processor 206 (Finding: 0043, FIG.2));
An apparatus camera, the apparatus camera coupled to the apparatus processor (sensor 202 such as a camera (Finding: 0044, FIG.2));
An apparatus display, the apparatus display coupled to the apparatus processor (display 204 (Finding: 0043, FIG.2));
An apparatus memory, the apparatus memory coupled to the apparatus processor (storage device 226 (Finding: 0043, FIG.2));
An apparatus positioning device, the apparatus positioning device coupled to the apparatus processor (sensor 202 such as a location sensor (Finding: 0044, FIG.2));
An apparatus data transfer module, the apparatus data transfer module coupled to the apparatus processor (the required software for transferring data internally and/or externally);
An apparatus data transfer device, the apparatus data transfer device coupled to the apparatus processor (the required physical means for transferring data internally and/or externally);
Apparatus electronic software, the apparatus software including the local software application (software modules and/or application of the AR device (Finding: FIG.5, FIG.2, 0041)), and the apparatus electronic software being stored in the apparatus electronic circuitry and hardware and adapted to enable, drive, and control the apparatus electronic circuitry and hardware (the required software that enables the AR device);
An apparatus power supply connection, the apparatus power supply connection coupled to the apparatus electronic circuitry and hardware and couplable to an apparatus power supply (the required power supply that powers the AR device);
And an apparatus housing, the apparatus housing comprising an apparatus interior and an apparatus exterior housing, the apparatus interior containing the apparatus electronic circuitry and hardware, the apparatus software, and the apparatus power supply connection; and the apparatus exterior housing comprising an apparatus frame enclosing an apparatus optical lens assembly (the required housing of the types of devices listed by Finding (Finding: 0019));
Wherein the apparatus positioning device is adapted to generate positioning data indicative of at least one parameter of a group consisting of a position, a location, an orientation, a movement, and a point of view of the apparatus (location, position, orientation of the display device (e.g., mobile computing device, wearable computing device such as a head mounted device) (Finding: 0019));
Wherein the apparatus is adapted to transmit the positioning data to the server (content data and sensor data is sent to one or more servers (Finding: 0037, 0027));
Wherein the apparatus is adapted to receive the computer-generated content from the server (the server provides AR authoring template to the AR device (Finding: 0072));
Wherein the computer-generated content includes dynamic content changing over time and space in real-time as related events in reality occur (For example, a first virtual content may be displayed in the display device such that, as the user moves around the environment, the mapping module re-renders the virtual content so that the virtual content is static with respect to a real-world location and therefore dynamic within the display device, having a dynamic perspective rendering (e.g., shrinking, growing, and rotating in the display device as the user moves away, closer, and around). Furthermore, a second virtual content may be displayed in the display device such that, as the user moves around the environment, the mapping module renders the virtual content at a user-defined angle so that the virtual content is dynamic with respect to a real-world location and therefore static within the display device, having a static perspective rendering (e.g., following the user) (Finding: 0088));
Wherein the computer-generated content comprises computer-generated content data encoding video (media content may be video data (Finding: 0137), wherein transmitting media content between the server and the AR device requires data for encoding said video data);
Wherein the computer-generated content and computer-generated content data are adapted to be generated by the server based on the positioning data (server 112 may generate playback content based on the AR content data, user profile data, and sensor data, wherein playback parameters include location information that is triggered when the user AR device is detected at the location information (Finding: 0037));
Wherein the computer-generated content is customized to the apparatus based on the computer-generated content data (dynamic perspective rendering (Finding: 0088) for the playback content at the location (Finding: 0037));
Wherein the computer-generated content is rendered and displayed on the apparatus display (the resulting content being rendered as disclosed in 0088 and 0037 of Finding);
Obtaining the positioning data of and generated by the apparatus (position data of the AR device (Finding: 0088, 0037, 0027) obtained by GPS sensors of the AR device (Finding: 0038));
Transmitting the positioning data from the apparatus to the server (At operation 606, the server 112 receives recorded content (and corresponding 3D coordinates) from the AR device 106 (Finding: 0085));
Receiving the computer-generated content at the apparatus from the server (the resulting content being rendered as disclosed in 0088 and 0037 of Finding);
And rendering and displaying the computer-generated content on the apparatus display.
However, Finding does not expressly teach, but McLachlan teaches:
Wherein the computer-generated content is customized to the apparatus based on the computer-generated content data being generated after, but nearly simultaneous to, generation of the positioning data (McLachlan teaches the concept of improving extended reality latency when overlaying XR objects (McLachlan: 0043). McLachlan discloses a method/system for improving latencies of 100 ms to 400 ms. The Examiner construes such improvements to latency as resulting in the computer-generated content of Finding being generated “after, but nearly simultaneous to,” the sending of position data to the server);
Wherein the computer-generated content is rendered and displayed on the apparatus display after, but nearly simultaneous to, generation of the computer-generated content by the server (applying the teachings of McLachlan as stated above enables near-instant rendering of the computer-generated content of Finding);
And wherein an occurrence of data generated, rendered, or displayed after, but nearly simultaneous to, generation of other data occurs within a latency not to exceed one second (McLachlan seeks to improve latencies of 100-400ms, which do not exceed one second (McLachlan: 0043));
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to improve the overlay latency of Finding in the manner suggested by McLachlan, because this enables an improved AR experience.
Further, the combined teachings above do not expressly teach, but Grzesiak teaches, the improved latency being applied to the latency between the server-side generation of the computer-generated content and the transmission of said computer-generated content by said server (“generating streaming data of image content by a server, the entire image content 400 may be down-scaled based on an artificial intelligence model to generate streaming data, and information about a downscaling ratio may be included in metadata… By using downscaling, a bandwidth from the server 100 to the electronic device 150 may be improved, and lag of the image content may be eliminated by reducing latency” (Grzesiak: 0073)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to utilize the teachings of Grzesiak, because this enables a method of reducing latency (Grzesiak: 0073).
Further, the combined teachings do not expressly teach, but Rodriguez teaches:
Wherein the dynamic content is selected from a content group consisting of augmented reality content and virtual reality content (Rodriguez: 0020, 0013).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above such that the computer-generated content may be selected in the manner taught by Rodriguez, because this enables an added variety of content to be experienced by the user.
Regarding claim 2, the combined teachings teach the method of claim 1, the method further comprising:
Communicating from the server to the apparatus (server communicating with AR devices (Finding: 0074));
Providing the server (server (Finding: 0074));
Generating the computer-generated content based on receiving the positioning data from the apparatus (dynamically rendering the virtual content to match the perspective based on the user’s movement/location (Finding: 0088));
Receiving the positioning data at and by the server from the apparatus (content data and sensor data is sent to one or more servers (Finding: 0037, 0027));
Generating the computer-generated content at and by the server based on the positioning data (server 112 may generate playback content based on the AR content data, user profile data, and sensor data, wherein playback parameters include location information that is triggered when the user AR device is detected at the location information (Finding: 0037)).
However, the combined teachings do not expressly teach, but Harding teaches, the above steps being carried out at the server (Harding teaches the concept of the server updating the view perspective of the AR object based on the change in position of the AR device (Harding: 0192)).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to utilize the server as suggested by Harding, because this enables effective offloading of workloads.
Further, the combined teachings do not expressly teach but Rodriguez teaches wherein the server comprises: server electronic circuitry and hardware including: a server processor; a server memory, the server memory coupled to the server processor; a server data transfer module, the server data transfer module coupled to the server processor; a server data transfer device, the server data transfer device coupled to the server processor; server electronic software, the server software stored in the server electronic circuitry and hardware and adapted to enable, drive, and control the server electronic circuitry and hardware; and a server power supply connection, the server power supply connection coupled to the server electronic circuitry and hardware and couplable to a server power supply (Rodriguez: 0015).
The combined teachings above contained a device which differed from the claimed process by disclosing a server lacking the details as presently claimed.
Rodriguez teaches the substituted component: a server with the details as claimed (Rodriguez: 0015).
The servers of the combined teachings above and of Rodriguez were known in the art to effectively offload resources, data, and services.
The server of the combined teachings above could have been substituted with the server of Rodriguez.
The results would have been predictable and would have equally carried out the function of a server. Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 3, the combined teachings teach the method of claim 1, the method further comprising:
Obtaining video data generated by the apparatus camera (Finding: 0049);
Combining the video data in a video data feed with the computer-generated content (overlaying the virtual object on an image of a physical object (Finding: 0019));
Overlaying the computer-generated content over the video data feed (virtual object overlaid on an image of a physical object (Finding: 0019));
And, displaying on the apparatus display a combination of the computer-generated content overlaid over the video data feed (the resulting AR generated in the AR device (Finding: 0019));
Wherein the computer-generated content comprises augmented reality content (Finding: 0018, 0019);
Wherein the augmented reality content corresponds to and augments the related events in reality occurring in real-time (by the definition of AR, and as taught by McLachlan above, virtual content is displayed/overlaid in real-time);
Wherein the augmented reality content comprises an augmented reality overlay (Finding: 0019);
Wherein the augmented reality overlay comprises augmented reality overlay data encoding video adapted to be combined with and overlaid over video data generated by the apparatus camera after, but nearly simultaneous to, generation of the augmented reality overlay data by the server (by the definition of AR, and as taught by McLachlan above, virtual content is displayed/overlaid in real-time, wherein the required step of the overlay data encoding video is likewise performed in real-time);
And wherein a combination of the augmented reality overlay and the video data comprises an augmented-reality-overlaid video encoded by augmented-reality-overlaid video data adapted to be rendered and displayed on the display (the resulting overlay data rendered on the AR device (Finding: 0019)).
Regarding claim 4, the combined teachings teach the method of claim 1, the method further comprising:
Wirelessly transmitting the positioning data from the apparatus to the server (wireless network 110 communicating data between the AR devices 106 and server 112 (Finding: 0042, FIG.1));
Receiving the positioning data at the server wirelessly transmitted from the apparatus (the position data of Finding (Finding: 0037, 0027) communicated through the wireless network of Finding);
Transmitting the computer-generated content from the server to the apparatus (the server provides AR authoring template to the AR device (Finding: 0072));
And wirelessly receiving the computer-generated content at the apparatus transmitted from the server (the AR device receiving content communicated through the wireless network of Finding);
Wherein the apparatus data transfer device comprises an apparatus wireless transceiver (the Examiner submits that some form of wireless transceiver is required for receiving/transmitting data through the wireless network of Finding);
And, wherein the server data transfer device is in communication with a network wireless transceiver in wireless communication with the apparatus wireless transceiver (the AR device and server communicating through the wireless network of Finding).
Regarding claim 5, the combined teachings teach the method of claim 1 but do not expressly teach, whereas Rodriguez teaches, the method further comprising:
Using an intermediate computing device to transmit the positioning data to the server;
Using the intermediate computing device to receive the computer-generated content from the server;
And using the intermediate computing device to process the computer-generated content for displaying the computer-generated content on the apparatus display;
Wherein the electronic circuitry and hardware and the electronic software further comprise a console and the intermediate computing device;
Wherein the console comprises the apparatus processor, the apparatus camera, the apparatus display, the apparatus memory, the apparatus positioning device, the apparatus data transfer module, the apparatus data transfer device, related aspects of the apparatus software, the apparatus housing, and the apparatus power supply connection;
Wherein the console may be referred to as a viewer;
Wherein the intermediate computing device comprises another processor, another memory, another data transfer module, another data transfer device, other aspects of the apparatus software, another housing, and another power supply connection;
Wherein the intermediate computing device may be referred to as an auxiliary processing unit;
Wherein the auxiliary processing unit is electronically couplable to the console;
And, wherein the auxiliary processing unit is adapted to handle aspects of data transfer and data processing separately from the console in generating, transferring, and processing the computer-generated content (Rodriguez: claim 26, abstract).
Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combined teachings above to utilize an intermediate computing device as suggested by Rodriguez, because this enables effective processing of AR data.
Regarding claim 6, the combined teachings teach the method of claim 1, the method further comprising:
Using the apparatus to create and locally save content that defines a Moment (content shared by a user (Finding: 0021-0024, 0033));
Using the apparatus to associate and locally save the positioning data with the content to create the Moment (placed content associated with real world coordinates (Finding: 0022));
Transmitting the Moment as user-created data to the server (The AR content is saved at the server (Finding: 0033));
Storing the Moment as user-created data in a database in communication with the server (database stores recorded contents (Finding: 0079));
And managing the Moment and the user-created data within the local software application and within a server software application (the server storing and sharing the content corresponds to managing the content (Moment)).
Regarding claim 7, the combined teachings teach the method of claim 6, the method further comprising: using the apparatus to interact with and manage the Moment (interacting with virtual objects (Finding: 0040)); and using the apparatus to download the Moment from the database via the server (other users accessing the shared content (Finding: 0071, 0033)).
Regarding claim 8, the combined teachings teach the method of claim 7, the method further comprising: using the apparatus to share access to the Moment with an account of another user (sharing AR content with other users (Finding: 0033)); and using the account of another user to access the Moment via the server (other users accessing the shared content).
Regarding claim 11, the combined teachings teach the method of claim 1, the method further comprising:
Using the apparatus to create and locally save a plurality of instances of content that defines a Trail (the Examiner construes user-created shareable contents as a Trail (Finding: 0021-0024, 0033));
Using the apparatus to associate and locally save the positioning data of each instance of content with each instance’s content to create the Trail (the content and parameters such as 3D coordinates are stored at the server (Finding: 0033));
Transmitting the Trail as user-created data to the server (storing the content on the server (Finding: 0033));
Storing the Trail as user-created data in a database in communication with the server (the resulting stored content on the server (Finding: 0033, 0079));
And managing the Trail and the user-created data within the local software application and within a server software application (the receiving and sharing of contents by the server and the required server software).
Regarding claim 12, the combined teachings teach the method of claim 11, the method further comprising: using the apparatus to interact with and manage the Trail (interacting with virtual objects (Finding: 0040)); and using the apparatus to download the Trail from the database via the server (users accessing the shared content via the server).
Claims 13-15 are apparatus claims corresponding to claims 1, 3, and 4. The limitations of claims 13-15 are substantially similar to the limitations of claims 1, 3, and 4. Therefore, claims 13-15 have been analyzed and rejected in a manner substantially similar to claims 1, 3, and 4.
Claims 16 and 18-20 are system claims corresponding to claims 1 and 3-5. The limitations of claims 16 and 18-20 are substantially similar to the limitations of claims 1 and 3-5. Therefore, claims 16 and 18-20 have been analyzed and rejected in a manner substantially similar to claims 1 and 3-5.
Regarding claim 17, the combined teachings teach the system of claim 16, the system further comprising: the apparatus (AR device (Finding: 0074)).
Allowable Subject Matter
Claims 9 and 10 are allowed.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to David H Chu whose telephone number is (571) 272-8079. The examiner can normally be reached M-F, 9:30 am-1:30 pm and 3:30-8:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel F Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H CHU/Primary Examiner, Art Unit 2616