Prosecution Insights
Last updated: April 19, 2026
Application No. 18/722,274

Cloud Application Data Streaming Using Drawing Data

Status: Final Rejection (§102, §103)
Filed: Jun 20, 2024
Examiner: OCAK, ADIL
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 74% (Favorable)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 4m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 74% (above average): 279 granted / 376 resolved, +16.2% vs TC avg
Interview Lift: +18.3% (resolved cases with vs. without an interview)
Typical Timeline: 2y 4m avg prosecution; 21 applications currently pending
Career History: 397 total applications across all art units
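The headline allowance figure above follows directly from the raw counts; a minimal sketch of the arithmetic (the interview and timeline figures are model outputs of the dashboard and are not derived here):

```python
# Illustrative arithmetic behind the career statistics above; the raw counts
# (279 granted of 376 resolved) come from the dashboard itself.

granted, resolved = 279, 376
allow_rate = granted / resolved * 100   # career allowance rate, in percent
print(round(allow_rate, 1))             # 74.2, matching the reported 74%
```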

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 21.7% (-18.3% vs TC avg)
§112: 6.5% (-33.5% vs TC avg)
TC average values are estimates; based on career data from 376 resolved cases.

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendment

This Office Action is made in response to the amendment filed 2/27/2026. Applicant has amended claims 1 and 10. Claim 23 is added.

Response to Arguments

Applicant’s arguments (see “Remarks” filed with the Amendment on 3/11/2025) have been fully considered. With respect to the rejections of claim 1 under 35 U.S.C. §102 and 35 U.S.C. §103: Applicant argues that Osman does not disclose an “application data stream” comprising both drawing data and one or more encoded video frames and instead streams a scene graph. However, Osman explicitly discloses transmission of encoded video frames to client devices. For example, Osman teaches: “Clients 1410 are configured to receive encoded video streams, decode the video streams, and present the resulting video to a user.” (Osman para.0141). Further, Osman explains that the video stream includes video frames: “The video stream (and optionally audio stream) received by Clients 1410 is generated and provided by Video Server System 1420 … this video stream includes video frames.” (Osman para.0145). Osman further explains that the rendering logic generates video frames based on game state: “Video Source 1430 is configured to provide a video stream, e.g., streaming video or a series of video frames that form a moving picture.” (Osman para.0149). These disclosures show that Osman transmits encoded video frames in a video stream to client devices. Under the broadest reasonable interpretation, such a stream corresponds to the claimed application data stream including encoded video frames. Accordingly, Osman teaches transmitting encoded video frames associated with execution of the application.
In response to Applicant’s argument regarding scene graph streaming: Applicant cites paragraph 0070 of Osman for the proposition that the system streams a scene graph rather than video. However, Osman describes multiple embodiments. Even where scene graph information may be transmitted between internal components, Osman clearly discloses embodiments in which rendered video frames are streamed to client devices. For example: “Video Server System 1420 is configured to provide the video stream to the one or more Clients 1410.” (Osman para.0140). Additionally, “The rendering logic produces raw video that is then usually encoded prior to communication to Clients 1410.” (Osman para.0150). Thus, Osman expressly teaches generating rendered video frames and transmitting them to client devices, which directly corresponds to the claimed encoded video frames.

In response to Applicant’s argument regarding displaying graphical objects with video frames: Applicant further argues that Osman does not teach displaying rendered graphical objects concurrently with decoded video frames. However, Osman teaches that the video frames are generated based on game state, which includes properties, images, colors, and textures of objects in the game environment: “This game state includes the position of objects in a game environment, as well as typically a point of view. The game state may also include properties, images, colors and/or textures of objects.” (Osman para.0149). Rendering logic then produces video frames representing these objects: “The rendering logic produces raw video that is then usually encoded prior to communication to Clients.” (Osman para.0150). Therefore, the graphical objects of the game environment are rendered into the video frames themselves. Display of the decoded frames necessarily includes display of the rendered graphical objects, meeting the claimed limitation under the broadest reasonable interpretation.
In response regarding claim 10: Applicant argues that Osman does not disclose transmitting an application data stream including both encoded video frames and drawing data. However, as discussed above, Osman explicitly discloses transmitting encoded video frames from a video server to client devices: “Clients 1410 are configured to receive encoded video streams, decode the video streams, and present the resulting video to a user.” (Osman para.0141). Additionally, Osman explains that the video server generates video frames representing the game state and viewpoint: “Video Source 1430 is configured to provide a video stream, e.g., streaming video or a series of video frames that form a moving picture.” (Osman para.0149). Thus, Osman teaches transmitting encoded video frames associated with execution of the application. The video stream inherently represents graphical drawing information generated by the rendering logic based on the game state. Under the broadest reasonable interpretation, this corresponds to the claimed application data stream including drawing data and encoded video frames.

Applicant asserts that the additional references do not cure the deficiencies of Osman. However, because Osman already teaches transmission of encoded video frames generated from application state to client devices, the cited references were applied to teach additional claimed features as stated in the Office Action. Applicant submits that claims 1 and 10 are non-obvious over the cited references, making claims 9, 14, and 16 non-obvious over the cited references at least for the reason that they each depend from a non-obvious independent claim. Therefore, Applicant requests that the §102 rejections of pending claims 1 and 10 with their respective dependent claims and the §103 rejections of claims 9, 14, and 16 be withdrawn.
In response: Applicant’s arguments with respect to claims 1 and 10, together with their respective dependent claims, have been fully considered but are not persuasive and are moot in view of the new grounds of rejection (see rejections below).

The Examiner provides the following suggestions to facilitate compact prosecution; they are not intended to be limiting or to suggest that the claims must be amended in the manner described. These suggestions are supported by Applicant’s specification, would help to narrow the claimed invention, and may help to overcome Osman. Applicant may wish to amend the claims to further clarify the relationship between the recited drawing data and the encoded video frames transmitted from the cloud-based application server. In particular, Applicant may consider clarifying that the drawing data causes the client device to render graphical objects. The specification describes embodiments in which the client device renders graphical objects based on drawing data received from the server (see, e.g., para(s).0016-0017). Clarifying that the client device renders graphical objects based on the drawing data may help distinguish architectures in which graphical content is fully rendered on the server and transmitted solely as encoded video frames. Applicant may also wish to amend the claims to further clarify that the rendered graphical objects are displayed concurrently with decoded video frames at the client device. The specification describes embodiments in which the client device displays rendered graphical objects concurrently with decoded video frames (see, e.g., para.0028). Clarifying this relationship between the rendered graphical objects and the decoded video frames may further define how graphical elements generated from the drawing data are presented together with the video content at the client device.
Additionally, Applicant may wish to clarify that the drawing data includes instructions or data that cause the client device to generate graphical objects locally. The specification describes that graphical objects may be generated by the client device based on received drawing data (see, e.g., para(s).0016-0017). Such clarification may further emphasize that graphical objects can be generated at the client device based on drawing data rather than solely being included with encoded video frames transmitted from the server.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-8, 10-12, 15, 17-20, and 23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Osman et al., Pub No US 2016/0361658 (hereafter Osman).

Regarding Claim 1, Osman discloses a method comprising: initiating application streaming between a client device and a cloud-based application server [FIG.10, para.0140: Discloses a Game System (element 1400) is configured to provide a video stream to one or more Clients (element 1410) via a Network (element 1415). Game System (element 1400) typically includes a Video Server System (element 1420) … configured to provide the video stream to the one or more Clients (element 1410).
Thus, Osman teaches a cloud gaming architecture in which a video server system streams application output (video stream) to the client devices over a network, corresponding to initiating streaming between a client device and a cloud-based server.]; receiving an application data stream associated with an application running on the cloud-based application server [FIG.10, para.0145: Discloses the video stream (and optionally audio stream) received by Clients (element 1410) is generated and provided by Video Server System (element 1420). This teaches that the client device receives a data stream from the server executing the application (game), corresponding to receiving an application data stream from a cloud-based application server.], wherein the application data stream comprises drawing data and one or more encoded video frames [para.0145: Discloses as is described further elsewhere herein, this video stream includes video frames. The video stream generated from the application includes encoded video frames representing the rendered output of the application, which includes graphical drawing information. The rendering logic renders graphical objects of the game environment.]; rendering one or more graphical objects based on the drawing data thereby creating one or more rendered graphical objects [FIG.10, para.0149: Discloses Video Source (element 1430) includes a video game engine and rendering logic; and para.0150: Discloses the rendering logic produces raw video. The rendering logic of the video source renders graphical objects of the game environment, producing video output representing rendered graphical objects.]; decoding the one or more encoded video frames thereby creating one or more decoded video frames [FIG.10, para.0141: Discloses Clients (element 1410) are configured to receive encoded video streams, decode the video streams, and present the resulting video to a user.
This explicitly teaches decoding encoded video frames.]; displaying the one or more rendered graphical objects concurrently with the one or more decoded video frames [para.0150: Discloses rendering logic producing raw video based on game state including objects in the game environment, which correspond to the claimed graphical objects appearing within the video frames that are decoded and displayed. The rendered graphical objects appear within the video frames that are decoded and displayed.]; and transmitting an input data stream to the cloud-based application server, the input data stream comprising data representing at least one input received at the client device [FIG.10, para.0146: Discloses the received game commands are communicated from Clients (element 1410) via Network (element 1415) to Video Server System (element 1420) and/or Game Server (element 1425). This teaches transmission of user input from the client device back to the server hosting the application, corresponding to transmitting an input data stream.]. 
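For readers tracing the mapping above, the client-side steps of claim 1 can be sketched in code. This is purely illustrative: every name, data structure, and value below is hypothetical and is not taken from Osman or from the application's actual implementation.

```python
# Hypothetical sketch of claim 1's client-side steps: receive an application
# data stream with drawing data and encoded frames, render, decode, display
# concurrently, and transmit an input data stream back. All names invented.

def render_object(cmd):
    # Stand-in for rendering one graphical object from drawing data.
    return {"object": cmd["shape"], "at": cmd["pos"]}

def decode_frame(frame):
    # Stand-in for decoding one encoded video frame.
    return {"frame_id": frame["id"], "decoded": True}

def handle_application_stream(stream, send_input):
    """Process one application data stream from the cloud-based server."""
    rendered = [render_object(c) for c in stream["drawing_data"]]
    decoded = [decode_frame(f) for f in stream["encoded_frames"]]
    # Rendered objects are displayed concurrently with decoded frames.
    display = {"overlays": rendered, "video": decoded}
    # An input data stream carrying local inputs goes back to the server.
    send_input({"inputs": [{"type": "tap", "x": 3, "y": 7}]})
    return display

shown = handle_application_stream(
    {"drawing_data": [{"shape": "button", "pos": (0, 0)}],
     "encoded_frames": [{"id": 1}]},
    send_input=lambda msg: None,
)
```

The sketch highlights the distinction the claim turns on: graphical objects rendered locally from drawing data are a separate input to the display step from the decoded video frames, rather than being baked into the frames server-side.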
Regarding Claim 2, Osman discloses the method of claim 1, and Osman further discloses further comprising: receiving a modified application data stream [para(s).0067-0077: Discloses the cloud rendering service (element 410) processes the game-state data (element 408) to generate a wide-field-of-view video … and transmits the rendered video (element 414) to a video server (element 416) … serves the videos over the network (element 406) for viewing by one or more spectators; and para.0123: Discloses the player provides input to a cloud gaming server, which takes those inputs, runs the game, renders the video, and streams it back to the player.]; and displaying at least a portion of the modified application data stream based on modified drawing data [para(s).0067-0072: Discloses the scene graph … describes the state of the video game that defines the interactive gameplay, including the position and orientation of objects, changes to the rendering settings of the world, and the like; and para.0123: Discloses the gaming server receives player inputs and renders the game accordingly, streaming the results back to the player; and para.0141: Discloses the client devices (element 1410) receive a video stream from the game server (element 1425) and display the rendered video on the client display.].
Regarding Claim 3, Osman discloses the method of claim 1, and Osman further discloses further comprising: receiving one or more inputs [para.0123: Discloses the player provides input to a cloud gaming server, which takes those inputs, runs the game, renders the video, and streams it back to the player; and para.0118: Discloses in a cloud-based gaming scenario, a player device provides inputs … which are received by a cloud server that executes the video game logic and streams the resulting video back to the player device.]; and encoding the one or more inputs, wherein the input data stream includes the encoded one or more inputs [para(s).0012, 0049, 0067-0072, 0123: Discloses the player’s client device encodes its local input and game-state data into a transmissible form (structured packets), which is then transmitted as an input data stream to the cloud server. Encoding here is for network transport.].

Regarding Claim 4, Osman discloses the method of claim 3, and Osman further discloses wherein the one or more inputs include video captured by a camera of the client device [FIG.9, para(s).0052-0053, 0063, 0106, 0123, 0132: Discloses input video captured by a camera of the client device.].

Regarding Claim 5, Osman discloses the method of claim 1, and Osman further discloses further comprising: outputting at least a portion of the application data stream using one or more output devices [para(s).0141-0142: Discloses output devices may include head mounted displays, terminals, personal computers, game consoles, tablet computers, telephones, set top boxes, kiosks, wireless devices, digital pads, stand-alone devices, handheld game playing devices, and/or the like. Client devices output video and audio streams via display and speaker components (also output devices).].
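The claim 3 mapping reads "encoding the one or more inputs" as packaging local inputs into a transmissible form for network transport. Under that reading, a minimal sketch follows; JSON is only an illustrative serialization format, not one named by Osman, and all names are hypothetical.

```python
# Hypothetical sketch of claim 3's input encoding, read as serializing
# collected inputs into structured packets for transmission to the server.
import json

def encode_inputs(inputs):
    # Serialize the collected inputs into an input data stream payload.
    return json.dumps({"inputs": inputs}).encode("utf-8")

packet = encode_inputs([{"type": "tap", "x": 5, "y": 7}])

# The server side would decode the same structured packet.
recovered = json.loads(packet.decode("utf-8"))
```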
Regarding Claim 6, Osman discloses the method of claim 5, and Osman further discloses wherein the one or more output devices comprises a speaker of the client device [FIG.9 element 1320, para(s).0062, 0133: Discloses speaker.].

Regarding Claim 7, Osman discloses the method of claim 1, and Osman further discloses wherein the drawing data includes a scene graph for at least a portion of the application data stream [para(s).0067-0072: Discloses the drawing data includes the scene graph, which defines objects, positions, orientations, and rendering settings used to generate video frames.].

Regarding Claim 8, Osman discloses the method of claim 1, and Osman further discloses the method further comprising: decoding at least a portion of the application data stream and displaying the decoded at least a portion of the application data stream according to the drawing data [para(s).0070-0072, 0141-0143: Discloses clients (element 1410) are configured to receive encoded video streams, decode the video streams, and present the resulting video to a user, e.g., a player of a game. The processes of receiving encoded video streams and/or decoding the video streams typically includes storing individual video frames in a receive buffer of the client. The video streams may be presented to the user on a display integral to client (element 1410) or on a separate device such as a monitor or television according to the scene-graph-based drawing data.].

Regarding Claim 10, Osman discloses a method comprising: initiating application streaming between a client device and a cloud-based application server [FIG.10, para.0140: Discloses a Game System (element 1400) is configured to provide a video stream to one or more Clients (element 1410) via a Network (element 1415) … Video Server System (element 1420) may receive a game command that changes the state of or a point of view within a video game, and provide Clients (element 1410).
Thus, Osman discloses a cloud gaming architecture where a server streams application output to client devices over a network.]; encoding one or more video frames associated with an application running on the cloud-based application server, thereby creating one or more encoded video frames [FIG.10, para.0150: Discloses the rendering logic produces raw video that is then usually encoded prior to communication to Clients (element 1410). This discloses that video frames generated from the application are encoded before being transmitted to the client, corresponding to encoding video frames associated with the cloud-executed application.]; transmitting, to the client device, an application data stream including the one or more encoded video frames and drawing data associated with the application [FIG.10, para.0140: Discloses the video stream (and optionally audio stream) received by Clients (element 1410) is generated and provided by Video Server System (element 1420) … this video stream includes video frames. Thus, Osman teaches transmission of a video stream from the server to the client. The video frames generated by the rendering logic contain the graphical drawing information produced by the application, corresponding to transmitting an application data stream that includes encoded video frames.], wherein the drawing data includes one or more commands for rendering graphical objects by the client device [FIG.10, para.0144: Discloses Clients (element 1410) is configured to perform further rendering … to overlay one video image on another video image, to crop a video image, and/or the like. This passage teaches that the client device can perform additional rendering operations on received video content, which corresponds to the claimed drawing data including commands for rendering graphical objects at the client device.
The rendering logic renders graphical objects of the game environment.]; receiving an input data stream from the client device representing an input received at the client device [FIG.10, para.0146: Discloses Clients (element 1410) are typically configured to receive inputs from a user ... The received game commands are communicated from Clients (element 1410) via Network (element 1415) to Video Server System (element 1420) and/or Game Server (element 1425). This teaches that user input from the client device is communicated to the server through the network, corresponding to receiving an input data stream representing user input.]; and in response to receiving the input data stream from the client device, modifying the application based on the input [para.0149: Discloses the video game engine is configured to receive game commands from a player and to maintain a copy of the state of the video game based on the received commands. The game engine modifies the game state based on received user commands, corresponding to modifying the application based on the received input data stream.].

Regarding Claim 11, Osman discloses the method of claim 10, and Osman further discloses further comprising: transmitting, to the client device, a modified application data stream including an encoded modified video frame associated with the modified application and modified drawing data associated with the modified application [para(s).0067-0077, 0123: Discloses the cloud rendering service (element 410) transmits encoded modified video frames reflecting updates in the application state and modified drawing data generated from updated scene-graph information after receiving client input.].
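The server-side counterpart (claim 10 as mapped above) can be sketched the same way: encode frames, bundle them with drawing data into an application data stream, and modify application state on receipt of an input stream. The class, fields, and commands below are invented for illustration and are not from Osman.

```python
# Hypothetical server-side sketch of claim 10's steps; all names invented.

class CloudAppServer:
    def __init__(self):
        self.state = {"score": 0}   # application state held on the server

    def encode(self, raw_frames):
        # Stand-in for encoding rendered frames before transmission.
        return [{"encoded": f} for f in raw_frames]

    def build_app_stream(self, raw_frames, draw_commands):
        # Application data stream = encoded video frames + drawing data,
        # where the drawing data carries commands the client renders itself.
        return {"encoded_frames": self.encode(raw_frames),
                "drawing_data": draw_commands}

    def on_input_stream(self, input_stream):
        # Modify the application based on the received input data stream.
        if input_stream.get("action") == "score":
            self.state["score"] += 1

server = CloudAppServer()
stream = server.build_app_stream(["raw0"], [{"cmd": "draw_rect"}])
server.on_input_stream({"action": "score"})
```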
Regarding Claim 12, Osman discloses the method of claim 10, and Osman further discloses further comprising: decoding at least a portion of the input data stream and providing the decoded at least a portion of the input data stream to the application as an input [para.0123: Discloses the cloud gaming server receives encoded input data from the client, takes those inputs and runs the game (decoding), and renders the video and streams it back to the player.].

Regarding Claim 15, Osman discloses the method of claim 10, and Osman further discloses wherein the drawing data comprises a scene graph for at least a portion of the application data stream [para(s).0067-0072: Discloses the scene graph is explicitly described as defining objects, positions, orientations, and rendering settings for the scene. It forms a core component of the drawing data used to generate application video frames.].

Regarding Claim 17, Osman discloses the method of claim 10, and Osman further discloses wherein the application comprises at least one of a video calling application, presentation application, meeting application, or streaming application [para(s).0012, 0067-0077: Discloses a cloud-based game streaming application that performs real-time streaming and playback.].

Regarding Claim 18, Osman discloses the method of claim 1, and Osman further discloses wherein the client device comprises a smartphone [para(s).0055, 0069, 0141: Discloses smartphone.].

Regarding Claim 19, Osman discloses the method of claim 1, and Osman further discloses wherein the client device comprises a browser [para.0158: Discloses may operate on a browser.].
Regarding Claim 20, Osman discloses the method of claim 1, and Osman further discloses wherein at least a portion of the application data stream is displayable at a plurality of resolutions on the client device [para(s).0151, 0158: Discloses displayed objects request that the user enter information such as operating system, processor, video decoder type, type of network connection, display resolution (any resolution entered - plural), etc. of Client. The information entered by the user is communicated back to Client Qualifier.].

Regarding Claim 23, Osman discloses one or more non-transitory computer readable media [FIG.10, para.0148: Discloses non-transitory storage (element 1455).] comprising program instructions executable by one or more processors [FIG.10, para(s).0148, 0155: Discloses a processor (element 1450) executes software instructions in order to perform the functions.] to perform operations comprising: initiating application streaming between a client device and a cloud-based application server [FIG.10, para.0140: Discloses a Game System (element 1400) is configured to provide a video stream to one or more Clients (element 1410) via a Network (element 1415) … Video Server System (element 1420) may receive a game command that changes the state of or a point of view within a video game, and provide Clients (element 1410). Thus, Osman discloses a cloud gaming architecture where a server streams application output to client devices over a network.]; receiving an application data stream associated with an application running on the cloud-based application server [FIG.10, para.0145: Discloses the video stream (and optionally audio stream) received by Clients (element 1410) is generated and provided by Video Server System (element 1420).
The client device receives the application data stream generated by the server.], wherein the application data stream comprises drawing data and one or more encoded video frames [para.0145: Discloses this video stream includes video frames; and FIG.10, para.0150: Discloses the rendering logic produces raw video that is then usually encoded prior to communication to Clients (element 1410). Encoded video frames produced from rendering operations correspond to the claimed application data stream.]; rendering one or more graphical objects based on the drawing data thereby creating one or more rendered graphical objects [FIG.10, para.0149-0150: Discloses Video Source (element 1430) includes a video game engine and rendering logic … The rendering logic produces raw video. The rendering logic renders graphical objects of the game environment.]; decoding the one or more encoded video frames thereby creating one or more decoded video frames [FIG.10, para.0141: Discloses Clients (element 1410) are configured to receive encoded video streams, decode the video streams, and present the resulting video to a user. This explicitly teaches decoding encoded video frames.]; displaying the one or more rendered graphical objects concurrently with the one or more decoded video frames [para.0150: Discloses rendering logic producing raw video based on game state including objects in the game environment, which correspond to the claimed graphical objects appearing within the video frames that are decoded and displayed. The rendered graphical objects appear within the video frames that are decoded and displayed.]; and transmitting an input data stream to the cloud-based application server, the input data stream comprising data representing at least one input received at the client device [FIG.10, para.0146: Discloses the received game commands are communicated from Clients (element 1410) via Network (element 1415) to Video Server System (element 1420) and/or Game Server (element 1425). 
This teaches transmission of client input to the server.].

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 9, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Osman et al., Pub No US 2016/0361658 (hereafter Osman) and further in view of Sundberg et al., Pat No US 10,452,868 (hereafter Sundberg).

Regarding Claim 9, Osman discloses the method of claim 1, and although Osman discloses the scene graph updates object attributes (position, orientation, rendering state) relative to prior frames [para(s).0067-0072], implying differences between graphical objects, Osman does not explicitly disclose wherein the drawing data comprises a value representing a difference between attributes of a graphical object and a reference graphical object (emphasis representing what’s not taught by Osman).
However, in analogous art, Sundberg discloses (col.13 lines 51-60) draw commands causing the client browser to render one or more layers to a cache (e.g., a resource cache), with one or more portions of the cache being rendered on a display of the client device and one or more other portions of the cache not being rendered until one or more subsequent events, such as a user input command (for example, a scroll event). Examples of resources that may be rendered to the resource cache include textures, fonts, shapes, curves, draw commands, predefined combinations thereof, or others. The resource cache facilitates the isolated remote application instance providing the client application smaller amounts of data than the resources themselves, such as identifiers that correspond to respective resources in the resource cache (col.13 lines 63-67). Sundberg discloses sending draw commands and delta-style updates rather than pixels, and rendering to a cache on the client: a direct “difference/updates” paradigm relative to existing render state (col.13 lines 43-col.12 lines 5-21). Thus, sending updated portions (i.e., deltas) rather than full pixel blocks functionally captures the “difference from prior (reference) state” concept. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Osman with this feature, as taught by Sundberg, in order to yield a predictable result, such as minimizing the amount of data to be transferred (Sundberg: col.3 lines 66-67).

Regarding Claim 14, Osman discloses the method of claim 10, and Osman further discloses further comprising: determining the drawing data based on the application [para(s).0067-0072: Discloses the scene graph and rendering parameters are generated by the application game engine executing on the cloud server.
The drawing data directly depends on the application’s state and logic, which determines object positions, orientations, and rendering settings.]; and while Osman describes the system maintaining scene-graph and game-state data in memory for rendering [para(s).0067-0072], Osman does not explicitly disclose storing the drawing data in a drawing data cache. However, in analogous art, Sundberg discloses rendering to a cache/resource cache and updating layers (a drawing-data cache) (col.19, lines 63-67). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Osman with this feature, as taught by Sundberg, in order to yield a predictable result, such as minimizing the amount of data to be transferred (Sundberg: col.3 lines 66-67).

Regarding Claim 16, Osman discloses the method of claim 10, and although Osman discloses the scene graph updates object attributes (position, orientation, rendering state) relative to prior frames [para(s).0067-0072], implying differences between graphical objects, Osman does not explicitly disclose wherein the drawing data comprises a value representing a difference between attributes of a graphical object and a reference graphical object (emphasis representing what’s not taught by Osman). However, in analogous art, Sundberg discloses (col.13 lines 51-60) draw commands causing the client browser to render one or more layers to a cache (e.g., a resource cache), with one or more portions of the cache being rendered on a display of the client device and one or more other portions of the cache not being rendered until one or more subsequent events, such as a user input command (for example, a scroll event). Examples of resources that may be rendered to the resource cache include textures, fonts, shapes, curves, draw commands, predefined combinations thereof, or others.
The resource cache facilitates the isolated remote application instance providing the client application smaller amounts of data than the resources themselves, such as identifiers that correspond to respective resources in the resource cache (col.13 lines 63-67). Sundberg discloses sending draw commands and delta-style updates rather than pixels, and renders to a cache on the client; this is a direct "difference/updates" paradigm relative to existing render state (col.13 lines 43-col.12 lines 5-21). Thus, sending updated portions (i.e., deltas) rather than full pixel blocks functionally captures the "difference from prior (reference) state" concept. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Osman with this feature, as taught by Sundberg, in order to yield predictable results such as minimizing the amount of data to be transferred (Sundberg: col.3 lines 66-67).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Osman et al., Pub No US 2016/0361658 (hereafter Osman) and further in view of Fuller et al., Pat No US 6,711,622 (hereafter Fuller).

Regarding Claim 13, Osman discloses the method of claim 12. Osman does not explicitly disclose wherein the at least a portion of the input data stream includes encoded video captured from a camera of the client device. However, in analogous art, Fuller discloses that video information was captured from a video camera and digitized, the digital video information was then encapsulated in a MIME encoded multipart data stream, and the client received this data stream and reconstructed frames of the digital video (col.2 lines 41-46).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Osman with this feature, as taught by Fuller, in order to yield predictable results such as providing a platform-independent video and audio streaming system that does not require the user to download additional programs beyond the functionalities found in a browser (Fuller: col.2 lines 47-50).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Abdo et al. (US 2010/0228871) discloses directing user inputs at a client device to a server and displaying user interface information generated by the server [para.0021], and discloses that a server may examine a received stream of encoded drawing orders and place "begin" and "end" frame markers where appropriate as it executes on the server [para(s).0038-0039]. Richard Parr (US 2021/0304504) discloses operation 602, which represents receiving augmented reality drawing data from a network device.
Operation 604 represents obtaining current video scene data from a camera of the client device. Operation 606 represents drawing, based on the augmented reality drawing data, augmented reality graphics over the current video scene to provide an augmented reality scene. Operation 608 represents rendering a representation of the augmented reality scene on a display of the client device [para.0036].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADIL OCAK, whose telephone number is (571) 272-2774. The examiner can normally be reached M-F, 8:00 AM - 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi, can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADIL OCAK/
Primary Examiner, Art Unit 2426
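For context on the rejection's core combination, the two Sundberg mechanisms it relies on, a client-side resource cache addressed by small identifiers and delta-style updates relative to a reference state, can be sketched roughly as follows. This is a hypothetical Python illustration with invented names, not Sundberg's actual implementation:

```python
# Hypothetical sketch of two ideas cited from Sundberg: (1) a client-side
# resource cache, so the server can refer to a heavy resource by a small
# identifier instead of resending it, and (2) delta-style updates, sending
# only the attributes of a graphical object that differ from a reference
# object the client already holds.

class ResourceCache:
    """Client-side cache mapping small identifiers to heavy resources."""

    def __init__(self):
        self._resources = {}

    def store(self, resource_id, payload):
        self._resources[resource_id] = payload

    def fetch(self, resource_id):
        return self._resources[resource_id]


def encode_delta(reference, current):
    """Server side: keep only the attributes that differ from the reference."""
    return {k: v for k, v in current.items() if reference.get(k) != v}


def apply_delta(reference, delta):
    """Client side: reconstruct the current object from reference + delta."""
    updated = dict(reference)
    updated.update(delta)
    return updated


cache = ResourceCache()
texture = b"\x00" * 4096                      # pretend 4 KB texture payload
cache.store(42, texture)                      # transferred in full only once

reference = {"x": 10, "y": 20, "rotation": 0, "texture_id": 42}
current = {"x": 12, "y": 20, "rotation": 15, "texture_id": 42}

delta = encode_delta(reference, current)      # only the changed attributes
assert delta == {"x": 12, "rotation": 15}
assert apply_delta(reference, delta) == current
assert cache.fetch(current["texture_id"]) == texture
```

Under these assumptions, each update carries a few changed attributes plus a cache identifier rather than full object state or pixel data, which is the data-minimization rationale the rejection cites (Sundberg: col.3 lines 66-67).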
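Similarly, the Fuller-style transport cited for Claim 13, encapsulating digitized video frames in a MIME-encoded multipart data stream that the client splits back into frames, might look roughly like this. The boundary string and frame payloads are invented for illustration:

```python
# Hypothetical sketch of a MIME-style multipart frame stream: each frame is
# delimited by a boundary marker, and the client reconstructs the frames by
# splitting the stream on that boundary.

BOUNDARY = b"--frame-boundary"   # invented boundary string


def encapsulate(frames):
    """Server side: wrap each frame payload between boundary markers."""
    stream = b""
    for frame in frames:
        stream += BOUNDARY + b"\r\n" + frame + b"\r\n"
    return stream + BOUNDARY + b"--\r\n"      # closing boundary


def reconstruct(stream):
    """Client side: split on the boundary to recover the frame payloads."""
    frames = []
    for part in stream.split(BOUNDARY):
        payload = part.strip(b"\r\n-")
        if payload:
            frames.append(payload)
    return frames


frames = [b"frame-1-jpeg-bytes", b"frame-2-jpeg-bytes"]
assert reconstruct(encapsulate(frames)) == frames
```

This keeps the client's job to generic stream splitting, consistent with Fuller's stated rationale of streaming video using only functionality already found in a browser (col.2 lines 47-50).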

Prosecution Timeline

Jun 20, 2024
Application Filed
Nov 07, 2025
Non-Final Rejection — §102, §103
Feb 27, 2026
Response Filed
Mar 12, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598348
METHODS AND APPARATUS TO CREDIT MEDIA SEGMENTS SHARED AMONG MULTIPLE MEDIA ASSETS
2y 5m to grant Granted Apr 07, 2026
Patent 12598334
LIVE-STREAMING STARTING METHOD, DEVICE AND PROGRAM PRODUCT
2y 5m to grant Granted Apr 07, 2026
Patent 12586039
Chat And Email Messaging Integration
2y 5m to grant Granted Mar 24, 2026
Patent 12574591
SYSTEM AND METHOD FOR PROVIDING ENHANCED AUDIO FOR STREAMING VIDEO CONTENT
2y 5m to grant Granted Mar 10, 2026
Patent 12572588
Local Public Notification Network Mediation
2y 5m to grant Granted Mar 10, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
74%
Grant Probability
92%
With Interview (+18.3%)
2y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 376 resolved cases by this examiner. Grant probability derived from career allow rate.
