DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 4-12 and 14-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Du et al. (“Du”).
Regarding claim 1, Du teaches a system comprising (the system comprises the apparatus configured to perform the operations in the body of the claim, as addressed below): at least one processor; at least one memory component storing instructions that, when executed by the at least one processor, cause the at least one processor to perform operations comprising (see Du, paragraph 0007 teaching “the techniques described herein relate to a computing device, including: a computing device display; a processor; and a memory configured with instructions to cause the processor to: generate an in-browser camera list, the in-browser camera list being selectable by a user within a browser; and add a virtual camera to the in-browser camera list, the virtual camera being operable to receive a physical camera frame from a physical camera listed in the in-browser camera list and generate a modified frame based on a local browser setting” such that the same components are utilized to perform the claimed functions, as further addressed below):
detecting selection of a virtual camera added as an extension in an internet browser (see Du, paragraphs 0018-0021 teaching “generating an in-browser camera list that is selectable by a user. The in-browser camera list may include a list of cameras, for example a list of physical cameras internal or external to a computing device, received from an operating system kernel to which the virtual camera has been added” where “the virtual camera may be installed for use within a browser via a plugin, add-on, or extension. Plugins, add-ons, and extensions are software additions that allow for the customization of web browsers” such that here the system is able to detect selection of a virtual camera which is added as an extension in a web browser);
generating a new canvas element for the virtual camera (note that “generating a canvas element” is extremely broad, as the manner of generation is not limited or specifically defined, and further neither “canvas” nor “element” is specifically limited; when interpreted as a functional element, such a canvas is considered to be any drawable area or render target, i.e., any functional element that can be drawn to, and can be any type of surface to which content can be directed; interpreting “canvas element” as recited together, a canvas element could be the canvas itself or could also be any element relating to a canvas, such that some setting or function that defines, changes, or modifies a canvas could be considered a canvas element; thus see Du, paragraphs 0018-0021 teaching “Upon receiving an indication that a browser is accessing a virtual camera from the in-browser camera list, the method may include receiving a physical camera frame from a physical camera associated with the virtual camera. 
Next a modified frame may be generated based on the physical camera frame and a local browser setting” and “Next the modified frame may be sent displayed in the browser display and/or sent to another user participating in the video conference via another client application” and “result is that the modified frames from the virtual camera are seamlessly substituted for the physical camera feed” where here selecting a virtual camera generates a new canvas element for the system, as previously the cameras in the in-browser camera list provided the frames or surfaces or canvases for display, and thus selection of the virtual camera instead causes the canvas element to be the new surface corresponding to the virtual camera frames which take in a physical video frame feed and use this as a new canvas element for the virtual camera allowing the addition of selected AR effects or other filter effects, and thus the output area of the virtual camera is the new canvas element that is created and for example note that “the modified frame may be sent displayed in the browser display and/or sent to another user participating in the video conference via another client application” and “virtual camera receives frames from a physical camera, modifies those physical camera frames, and outputs the modified frames. 
The modified frames may be displayed, saved, or sent to another computing device” means that the canvas element being written to corresponds to the modified frame which is not actually displayed until sent to a display and for example the modified image frames could simply be “saved” such that again this means the modified images correspond to the new canvas element being drawn to where such canvas element may then be displayed – see further paragraphs 0022-0024 and figure 1A where “virtual camera display” corresponds to the new canvas element created for the virtual camera such that the result of displaying the modified image frame after being sent for display is “virtual camera display 106” which also may be considered to be a new canvas element that is generated each time the image is rendered as this is a canvas for display as well);
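For illustration only, and not as a characterization of Du's actual implementation or of applicant's disclosure, the broadest-reasonable-interpretation notion above of a canvas element as a generated render target might be sketched as follows; all identifiers are hypothetical, and the document object is injected so the sketch stays self-contained:

```javascript
// Illustrative sketch only: generating a new canvas element to serve as
// the render target ("drawable area") for a virtual camera's output.
// All identifiers are hypothetical and not taken from Du or the claims.
function createVirtualCameraCanvas(doc, width, height) {
  // `doc` stands in for the browser Document (e.g., `document` in a page).
  const canvas = doc.createElement('canvas');
  canvas.width = width;   // sized to match the physical camera frames
  canvas.height = height;
  canvas.id = 'virtual-camera-canvas';
  return canvas;
}
```

In an actual browser context, the returned element could then be attached to the page or kept as a drawing surface for modified frames.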
accessing video frames captured by a hardware camera coupled with a computing device (see Du, paragraphs 0018-0020 teaching “method may include receiving a physical camera frame from a physical camera associated with the virtual camera. Next a modified frame may be generated based on the physical camera frame and a local browser setting” such that here the system accesses the physical hardware camera coupled with the computing device);
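As a hedged illustration of the "accessing video frames captured by a hardware camera" limitation (not asserted to be Du's implementation), a browser page ordinarily reaches a hardware camera through the standard MediaDevices API; the constraint-building helper below is pure so it can be shown in isolation, and the actual API call appears only in a comment:

```javascript
// Illustrative sketch only: building the constraint object a page would
// pass to navigator.mediaDevices.getUserMedia() to access frames from a
// specific hardware camera. Names are hypothetical.
function cameraConstraints(deviceId) {
  // With no deviceId, any available camera is requested.
  return { video: deviceId ? { deviceId: { exact: deviceId } } : true };
}

// In a browser:
//   const stream = await navigator.mediaDevices.getUserMedia(cameraConstraints(id));
//   // frames can then be read from a <video> element bound to `stream`
```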
detecting a selection of an augmented reality (AR) option (see Du, paragraphs 0018-0020 teaching “method may include receiving a physical camera frame from a physical camera associated with the virtual camera. Next a modified frame may be generated based on the physical camera frame and a local browser setting” and “local browser setting may be selectable by a user and enable features such as filters, animations, visual text interpretations, and so forth” where selection of this local browser setting, which augments the reality of the user, constitutes the detected selection of an AR option, and this is what modifies the physical camera feed; paragraphs 0045-0048 provide additional options for selection of AR settings to apply, such as “filter effects may comprise changing the pixels of a physical camera frame in an algorithmic way. For example, filter effects may comprise changing colors, creating brush or sketch effects, blurring, sharpening, brightening, changing the contrast, overlaying effects (e.g., a sunburst effect), etc. Sketch filter settings 112 in virtual camera setup 102 is one example of local browser setting 258” and “local browser setting 258 may relate to one or more animation effects. For example, animation effects may include generating an avatar, animal, or fanciful character (e.g., a robot character) that mimics the facial expression of the user. An example local browser setting 258 relating to an animation may comprise a drop-down box selecting an animal or character” and “text effects may include displaying a text interpretation of speech in the modified frame, similar to text effect 116 from FIG. 1B, described above. The text interpretation may be generated from speech recorded in an audio track associated with the physical camera frames. In examples, text effects may include translating or summarizing speech. In examples, text effects may include animating the text to express context in the speech. 
In examples, the text effects may include moving the text interpretation around virtual camera display 107 in response to the first user moving. An example local browser setting 258 for a text effect may include selecting a language for translation, for example” such that these are all examples of AR options that can be selected from and detected in order to produce the modified image frame on the canvas element);
applying the AR option to each video frame of at least a subset of the accessed video frames (see Du, paragraphs 0018-0020 teaching “a modified frame may be generated based on the physical camera frame and a local browser setting. The local browser setting may be selectable by a user and enable features such as filters, animations, visual text interpretations, and so forth. Next the modified frame may be sent displayed in the browser display and/or sent to another user participating in the video conference via another client application. The result is that the modified frames from the virtual camera are seamlessly substituted for the physical camera feed” and, as in paragraphs 0045-0048 as explained above, the selected AR option is applied, which modifies the physical image of the camera feed in the virtual camera canvas element to create a modified image according to whatever AR option was selected) at a predefined rate, by selecting the subset of the accessed video frames based on a predefined number of video frames per second (note that the claim does not define how such predefined number of frames is determined and does not actually require determining any predefined number of video frames per second, so long as the selecting of the subset is in some way based on a predefined number of video frames per second; see Du, paragraphs 0018-0020 as explained above, where the AR option selected is applied at a predefined rate that matches the physical camera feed, as “the modified frames from the virtual camera are seamlessly substituted for the physical camera feed,” such that it is applied at a rate predefined to substitute each frame of the physical camera feed, where “virtual camera receives frames from a physical camera, modifies those physical frames, and outputs the modified frames”; this means that, from the perspective of applying the AR effect by the virtual camera, the predefined rate of applying is by selecting the subset of the accessed video frames at a 1:1 rate based 
on a predefined number of video frames per second, where this predefined number of video frames per second corresponds to whatever rate the physical camera is supplying, as the physical camera feed provides the frames at some rate which is predefined from the perspective of the virtual camera, and then each of these frames is selected as the subset to which the AR effect is applied; see further paragraph 0026 teaching “Once a browser is configured to use a virtual camera, the first user may join a video conference substituting the virtual camera feed for the physical camera feed” such that, as the modified image effects are applied to each frame from the physical camera, they are applied at whatever predefined rate of capture and display the physical camera necessarily has, and at this point each of the video frames is a subset of the accessed video frames, as the frames have been accessed prior to the joining of the video conference);
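As an illustration of the claim language being construed (and not of Du's actual code), selecting a subset of accessed frames based on a predefined number of frames per second could be sketched as follows; the 1:1 interpretation discussed above corresponds to a target rate equal to the source rate, and all names are hypothetical:

```javascript
// Illustrative sketch: select a subset of frames so that, from a source
// running at sourceFps, roughly targetFps frames per second are kept.
// With targetFps >= sourceFps this degenerates to the 1:1 selection
// discussed above (every accessed frame is in the subset).
function selectFrameSubset(frames, sourceFps, targetFps) {
  if (targetFps >= sourceFps) return frames.slice(); // 1:1 case
  const step = sourceFps / targetFps; // frames to skip between picks
  const subset = [];
  for (let i = 0; i < frames.length; i += step) {
    subset.push(frames[Math.floor(i)]);
  }
  return subset;
}
```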
providing each video frame of the subset of the accessed video frames comprising the applied AR option to the new canvas element (see Du, paragraphs 0018-0020 teaching “a modified frame may be generated based on the physical camera frame and a local browser setting. The local browser setting may be selectable by a user and enable features such as filters, animations, visual text interpretations, and so forth. Next the modified frame may be sent displayed in the browser display and/or sent to another user participating in the video conference via another client application. The result is that the modified frames from the virtual camera are seamlessly substituted for the physical camera feed” and, as in paragraphs 0045-0048 as explained above, the selected AR option is applied, which modifies the physical image of the camera feed in the virtual camera canvas element to create a modified image according to whatever AR option was selected; the so-modified video image frame is provided as the “modified image,” and this element that is “sent [to be] displayed” is thus a video frame with the applied AR option provided to the canvas element, which is sent to be displayed; or, again as noted above, the display target that renders the modified image may also be considered to be provided such video frames, where this display target is the new canvas element); and
causing display, of each video frame of the subset of the accessed video frames comprising the applied AR option in the new canvas element, on a user interface of the computing device (see Du, paragraphs 0018-0020 teaching “a modified frame may be generated based on the physical camera frame and a local browser setting. The local browser setting may be selectable by a user and enable features such as filters, animations, visual text interpretations, and so forth. Next the modified frame may be sent displayed in the browser display and/or sent to another user participating in the video conference via another client application. The result is that the modified frames from the virtual camera are seamlessly substituted for the physical camera feed” and, as in paragraphs 0045-0048 as explained above, the selected AR option is applied, which modifies the physical image of the camera feed in the virtual camera canvas element to create a modified image according to whatever AR option was selected; the so-modified video image frame is provided as the “modified image,” and this element that is “sent [to be] displayed” is thus a video frame with the applied AR option provided to the canvas element, which is sent to be displayed; or, again as noted above, the display target that renders the modified image may also be considered to be provided such video frames, where this display target is the new canvas element).
Regarding claim 2, Du teaches all that is required as applied to claim 1 above and further teaches wherein the predefined number of video frames per second is determined based on a maximum rate of processing capability of the system (note that the claims do not require any actual determination of a maximum rate of processing capability of the system, nor how such a maximum rate of processing capability would affect the predefined number of video frames per second, nor does the claim limit what “a maximum rate of processing capability of the system” is defined as, such that any processing capability of any component of the system which can be seen to have a maximum rate is within the scope of the limitation; thus, see Du, paragraphs 0018-0020 as explained above, where the predefined rate corresponds to a predefined number of video frames per second as “the modified frame may be sent displayed in the browser display and/or sent to another user participating in the video conference via another client application. The result is that the modified frames from the virtual camera are seamlessly substituted for the physical camera feed” and, as discussed above, the effects may be applied to each video frame such that the rate of application matches the video feed from the physical camera, meaning that the predefined rate is a predefined number of video frames per second corresponding to the physical camera; this in turn means that the predefined rate of applying, which is based on the predefined number of video frames per second being received from the physical camera, is determined based on a maximum rate of processing capability of the system, where this maximum rate of processing capability of the system corresponds to the maximum rate for processing, which is the disclosed one application of the AR effect per one image frame received).
Regarding claim 4, Du teaches all that is required as applied to claim 1 above and further teaches wherein the selection of the virtual camera is made via a list of camera devices in the user interface (see Du, paragraph 0018 teaching “generating an in-browser camera list that is selectable by a user. The in-browser camera list may include a list of cameras, for example a list of physical cameras internal or external to a computing device, received from an operating system kernel to which the virtual camera has been added. Upon receiving an indication that a browser is accessing a virtual camera from the in-browser camera list, the method may include receiving a physical camera frame from a physical camera associated with the virtual camera”).
Regarding claim 5, Du teaches all that is required as applied to claim 1 above and further teaches wherein the selection of the AR option is made via a list of AR options displayed in the user interface (see Du, paragraph 0018 teaching the AR options which are the “local browser setting” which “may be selectable by a user and enable features such as filters, animations, visual text interpretations, and so forth” and as in paragraphs 0045-0048 the options are selectable by a user on the interface where “Browser application 254 may include a local browser setting 258. Local browser setting 258 may include one or more configurations relating to the virtual camera 262” and “local browser setting 258 relating to an animation may comprise a drop-down box selecting an animal or character” and “Sketch filter settings 112 in virtual camera setup 102 is one example of local browser setting 258” where selection from a drop-down box is selection via a list, and as in paragraph 0078 “the local browser setting is selectable in the browser and includes one or more of: a filter effect, an animation effect, a text effect, or a sound effect” such that selection in the browser from such selectable options means they are displayed in some list form in order to select the listed option; such selection from a displayed list can also be seen in paragraph 0025 and figures 1A and 1B teaching “virtual camera setup 102 includes a filter selector 110, for which a sketch type filter is selected. As may be seen, the sketch filter has been applied to physical camera display 104 in virtual camera display 106, generating a modified frame that looks much like a sketch drawing. Other types of browser settings are possible as well, as further described below”).
Regarding claim 6, Du teaches all that is required as applied to claim 1 above and further teaches wherein the user interface is displayed via the internet browser (see Du, paragraph 0078 teaching “the local browser setting is selectable in the browser and includes one or more of: a filter effect, an animation effect, a text effect, or a sound effect” and see paragraph 0025 and figures 1A and 1B teaching “local browser settings may facilitate the application of any combination of filter effects, animation effects, text effects, and/or sound effects. In the example, virtual camera setup 102 includes a filter selector 110, for which a sketch type filter is selected. As may be seen, the sketch filter has been applied to physical camera display 104 in virtual camera display 106, generating a modified frame that looks much like a sketch drawing. Other types of browser settings are possible as well, as further described below” such that it can be seen that the user interface is displayed via the internet browser where the program is being used).
Regarding claim 7, Du teaches all that is required as applied to claim 1 above and further teaches generating an off-screen canvas element (note that this concept has already been addressed where the off-screen canvas element may be considered the “modified image” frame target before the frame is sent by the virtual camera output for actual display, recording, or transmission and thus this canvas element may be considered an off-screen canvas element as in paragraphs 0018-0020 teaching “Next the modified frame may be sent displayed in the browser display and/or sent to another user participating in the video conference via another client application. The result is that the modified frames from the virtual camera are seamlessly substituted for the physical camera feed” and “virtual camera receives frames from a physical camera, modifies those physical camera frames, and outputs the modified frames. The modified frames may be displayed, saved, or sent to another computing device” such that here the “outputs the modified frames” refers to an off-screen canvas element in order to be sent to the new canvas element of the actual display target for the modified frame such as created by the display circuitry as in paragraph 0023 showing the “display of the modified frames” in the new canvas element “virtual camera display 106” as also taught in paragraphs 0062-0063 teaching “virtual camera 262 may render modified camera frame 308” and “may send modified camera frame 308 to a display processing module. The display processing module may comprise a browser display such as virtual camera display” such that again “modified camera frame 308” is rendered as an off-screen canvas element as it is not displayed until it is sent to the new canvas element which is the “browser display such as virtual camera display 106”); and
wherein the AR option is applied to each video frame of the subset of the accessed video frames in the off-screen canvas element and provided from the off-screen canvas element to the new canvas element (see Du, paragraphs 0018-0020 teaching “Next the modified frame may be sent displayed in the browser display and/or sent to another user participating in the video conference via another client application. The result is that the modified frames from the virtual camera are seamlessly substituted for the physical camera feed” and “virtual camera receives frames from a physical camera, modifies those physical camera frames, and outputs the modified frames. The modified frames may be displayed, saved, or sent to another computing device” such that here the “outputs the modified frames” refers to an off-screen canvas element in order to be sent to the new canvas element of the actual display target for the modified frame such as created by the display circuitry as in paragraph 0023 showing the “display of the modified frames” in the new canvas element “virtual camera display 106” as also taught in paragraphs 0062-0063 teaching “virtual camera 262 may render modified camera frame 308” and “may send modified camera frame 308 to a display processing module. The display processing module may comprise a browser display such as virtual camera display” such that again “modified camera frame 308” is rendered as an off-screen canvas element as it is not displayed until it is sent to the new canvas element which is the “browser display such as virtual camera display 106”).
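For illustration of the claimed off-screen/on-screen relationship only (browsers provide OffscreenCanvas for this pattern, though Du is not cited as disclosing that API), the effect could be sketched as below; plain arrays of pixel values stand in for canvas image data, a toy invert filter stands in for an arbitrary AR effect, and all names are hypothetical:

```javascript
// Illustrative sketch: apply an "AR effect" to a frame in an off-screen
// buffer, then hand the result to the visible (new) canvas element.
// Arrays of pixel values stand in for canvas image data so the sketch
// stays self-contained. All identifiers are hypothetical.
function applyEffectOffscreen(frame, effect) {
  // "Off-screen" work: nothing here touches the visible page.
  return frame.map(effect);
}

function invertEffect(pixelValue) {
  return 255 - pixelValue; // toy stand-in for a real AR filter
}

// In a browser, the modified frame would then be drawn onto the visible
// canvas, e.g. via ctx.putImageData(...) on the "new canvas element".
```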
Regarding claim 8, Du teaches all that is required as applied to claim 1 above and further teaches wherein the AR option is a first AR option (see Du, paragraphs 0018-0020 teaching “modified frame may be generated based on the physical camera frame and a local browser setting. The local browser setting may be selectable by a user and enable features such as filters, animations, visual text interpretations, and so forth” where the local browser settings are AR options and a user may select one of them as in paragraphs 0022-0024 teaching an AR option selected as a “sketch effect”) and the operations further comprise:
detecting a selection of a second AR option (see Du, paragraphs 0022-0030 and figures 1A and 1B where a second AR option has been selected as where “Virtual camera display 107 differs from virtual camera display 106 in that it includes a text effect 116. Further text effect 116 includes a text transliteration of speech from the first user. This may be further seen by the filter selected in filter selector 111, which is “sketch with text.” In examples, it may be possible to combine any number of effects in virtual camera display 107”);
applying the second AR option to each video frame of at least a second subset of the accessed video frames, at the predefined rate (see Du, paragraphs 0022-0030 and figures 1A and 1B where a second AR option has been applied to each video frame of the second subset of frames that is now being accessed from the physical camera and applied at a predefined rate corresponding to the frames per second of the video feed where “Virtual camera display 107 differs from virtual camera display 106 in that it includes a text effect 116. Further text effect 116 includes a text transliteration of speech from the first user. This may be further seen by the filter selected in filter selector 111, which is “sketch with text.” In examples, it may be possible to combine any number of effects in virtual camera display 107”);
providing each video frame of the second subset of the accessed video frames comprising the applied second AR option to the new canvas element (see Du, paragraphs 0022-0028 and figures 1A and 1B above teaching “Virtual camera display 107 differs from virtual camera display 106 in that it includes a text effect 116. Further text effect 116 includes a text transliteration of speech from the first user. This may be further seen by the filter selected in filter selector 111, which is “sketch with text.” In examples, it may be possible to combine any number of effects in virtual camera display 107” such that each video frame now has the second AR option applied and this is provided to the new canvas element of the virtual camera modified frame canvas element to apply the effect and is also sent to the new canvas element of the display target as explained above in paragraphs 0062-0063 teaching “virtual camera 262 may render modified camera frame 308” and “may send modified camera frame 308 to a display processing module. The display processing module may comprise a browser display such as virtual camera display” such that again “modified camera frame 308” is rendered as a new canvas element and furthermore these frames as modified are sent to the new canvas element of the rendered display such as the actual browser display on the user interface); and
causing display, of each video frame of the second subset of the accessed video frames comprising the applied second AR option in the new canvas element, on the user interface of the computing device (see Du, paragraphs 0022-0028 and figures 1A and 1B above teaching “Virtual camera display 107 differs from virtual camera display 106 in that it includes a text effect 116. Further text effect 116 includes a text transliteration of speech from the first user. This may be further seen by the filter selected in filter selector 111, which is “sketch with text.” In examples, it may be possible to combine any number of effects in virtual camera display 107” such that each video frame now has the second AR option applied and this is provided to the new canvas element of the virtual camera modified frame canvas element to apply the effect and is also sent to the new canvas element of the display target as explained above in paragraphs 0062-0063 teaching “virtual camera 262 may render modified camera frame 308” and “may send modified camera frame 308 to a display processing module. The display processing module may comprise a browser display such as virtual camera display” such that again “modified camera frame 308” is rendered as a new canvas element and furthermore these frames as modified are sent to the new canvas element of the rendered display such as the actual browser display on the user interface).
Regarding claim 9, Du teaches all that is required as applied to claim 1 above and further teaches wherein the selection of the virtual camera is detected based on identifying a camera device identifier associated with the virtual camera (see Du, paragraphs 0050-0054 teaching “virtual camera initialization module” which “may be executed prior to virtual camera use in a web application to initialize in-browser camera list 256. Virtual camera initialization module 259 generates in-browser camera list 256 and adds virtual camera 262 to in-browser camera list 256” and “virtual camera initialization module 259 may receive a camera list from operating system kernel 250, for example operating system camera list 252. Virtual camera initialization module 259 may add the operating system camera list 252 to the in-browser camera list” and “Virtual camera initialization module 259 may be executed on startup of browser application 254 or upon initialization of a web application that uses a camera” such that here the virtual camera is added to a list for selection so that it can be identified as a camera device which allows detection of selection from a list identifying the cameras including the virtual camera which is available for use by the browser as in paragraph 0044 and figure 1A teaching “Browser application 254 may include an in-browser camera list 256. In examples, the in-browser camera list 256 may include a combination of operating system camera list 252 and virtual camera 262. In-browser camera list 256 may be selectable by a user within browser application 254. For example, FIG. 1A depicts in-browser camera selectable list 108, which may be used to select a camera from in-browser camera list 256”).
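As a hedged illustration of detecting selection via a camera device identifier (not taken from Du's disclosure), a browser camera list pairs each entry with a deviceId (as returned by navigator.mediaDevices.enumerateDevices()), and the selection can be matched against the virtual camera's identifier; the list is passed in here so the sketch is self-contained, and all names are hypothetical:

```javascript
// Illustrative sketch: detect whether the user's selection corresponds
// to the virtual camera by comparing camera device identifiers.
// All identifiers are hypothetical.
const VIRTUAL_CAMERA_ID = 'virtual-cam-0'; // hypothetical deviceId

function isVirtualCameraSelected(cameraList, selectedDeviceId) {
  const selected = cameraList.find((cam) => cam.deviceId === selectedDeviceId);
  return !!selected && selected.deviceId === VIRTUAL_CAMERA_ID;
}
```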
Regarding claim 10, Du teaches all that is required as applied to claim 1 above and further teaches wherein the hardware camera coupled with the computing device is a last hardware camera used by the computing device (note that the claim does not define what the “use” of the last hardware camera was or what “last” refers to, such that if at any point in the past a hardware camera was used by the computing device and that hardware camera is coupled with the computing device, then the limitation is met; thus see Du, paragraph 0054 teaching “the physical camera that is the default camera in operating system camera list 252 or in-browser camera list 256 may be associated with virtual camera 262 via local browser camera setting 260 at initialization of browser application 254” and as in paragraph 0018 “Upon receiving an indication that a browser is accessing a virtual camera from the in-browser camera list, the method may include receiving a physical camera frame from a physical camera associated with the virtual camera” which means that a default hardware camera coupled with the computing device is the hardware camera, and as it is the default camera this means that at least it is the last hardware camera used by the computing device as the default camera; furthermore, note that as the system is capable of being used multiple times, then at any time after the default camera has been used for the process at least once, running the system again would make that default camera the last hardware camera coupled with the computing device that was used for the more specific purpose of the processing in relation to the virtual camera).
Regarding claims 11-12 and 14-19, the instant claims correspond to a “computer-implemented method comprising” a series of functions to be implemented by a computer, where the functions performed correspond to the functions performed by the system of claims 1, 2, 4-8 and 10, respectively. Du teaches such a computer-implemented method as implemented by the computer system as already addressed in the rejections of claims 1, 2, 4-8 and 10 above. In light of this, the limitations of claims 11-12 and 14-19 correspond to the limitations of claims 1, 2, 4-8 and 10, respectively; thus they are rejected on the same grounds as claims 1, 2, 4-8 and 10, respectively.
Regarding claim 20, the instant claim is recited as an apparatus in the form of a “non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising” the operations as addressed in the rejection of the computer system embodiment of claim 1. Du teaches such a non-transitory CRM as already addressed above (see Du, paragraph 0034 “processor 204 may include multiple processors, and memory 206 may include multiple memories. Processor 204 may be in communication with any cameras, sensors, and other modules and electronics of computing device 202. Processor 204 is configured by instructions (e.g., software, application, modules, etc.) to execute a virtual camera. The instructions may include non-transitory computer readable instructions stored in, and recalled from, memory 206” and paragraph 0090 teaching “Note also that the software implemented aspects of the example implementations are typically encoded on some form of non-transitory program storage medium or implemented over some type of transmission medium. The program storage medium may be magnetic (e.g., a floppy disk or a hard drive) or optical (e.g., a compact disk read only memory, or CD ROM), and may be read only or random access. Similarly, the transmission medium may be twisted wire pairs, coaxial cable, optical fiber, or some other suitable transmission medium known to the art. The example implementations not limited by these aspects of any given implementation”). In light of this, the limitations of claim 20 correspond to the limitations of claim 1; thus it is rejected on the same grounds as claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 3 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Du in view of Nam et al2 (“Nam”).
Regarding claim 3, Du teaches all that is required as applied to claim 1 above but fails to teach wherein the predefined number of video frames per second depends on a length of time it takes to render the AR option, such that any effects which are to be applied can be applied within the limits of the processing system, for example.
In the same field of endeavor relating to processing video frames and providing graphics processing for an incoming stream of video frames, Nam teaches that it is known to process received frames such that the predefined rate at which processing is applied to each frame can be adjusted, such that the video frames per second at which the processing is to be applied depends on a length of time it takes to render the option (see Nam, paragraphs 0009-0011 teaching “to adjust at least one of a number of objects to be rendered in one frame among the objects included in the animation content and preset target frames per second (FPS) by calculating a rendering time of the objects to be rendered in one frame and comparing the rendering time with the target FPS” and “control unit may be further configured to decrease the target FPS if a time obtained by summing respective rendering times of the objects to be rendered in one frame is greater than a target time corresponding to the target FPS and the number of objects to be rendered in one frame is less than a preset number” and as in paragraph 0016 “control unit may measure a matrix calculation time for rendering with respect to each of the objects and determine the measured time as a rendering time for each of the objects” which is further explained in paragraphs 0051-0053 teaching “calculate a rendering time of objects included in a current window. In this case, the rendering time of the objects may include a time of transmitting a rendering execution command from the animation control unit 120 to a rendering module (not shown) and a time of calculating coordinates, matrices, and the like of the objects to be rendered. 
The animation control unit 120 may measure a time of calculating a matrix of each of the objects and set the measured time as a rendering time of each of the objects” and “may calculate a rendering time of each of the objects by measuring, using a performance function, a rendering start time point of the objects included in the current window and an end time point of a matrix calculation of each of the objects” and “parameter may indicate how many pixels are moved by an object, how much the object is magnified or reduced, or the like” and as in paragraphs 0055-0057, “control unit 120 may compare the rendering time of the objects included in the current window with target frames per second (FPS). For example, the animation control unit 120 may compare the rendering time of the objects with a target time corresponding to the target FPS. In this case, the target time may be calculated as a reciprocal of the target FPS. For example, if the target FPS is 60, the target time may be about 16 ms” and “If the number of objects included in the current window is less than the preset number, the animation control unit 120 may decrease the target FPS. When the target FPS is decreased, the target time increases, and accordingly, the rendering time of the objects included in the current window may be equal to or less than the adjusted target time”, thus teaching that it is known, when a rendering time would exceed the ability of the system to provide the rendering effect, to change the frame rate at which effects are applied). Thus Nam teaches a known technique applicable to the base system of Du, which also processes video frames and applies graphics processing options to the stream of video frames.
Therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Du by applying the known technique of Nam, as doing so would be no more than application of a known technique to a base device ready for improvement, where the results of the modification would be predictable and would result in an improved system. Here the predictable result of the combination would be that the rate at which AR effects are applied in Du would be controlled by adjusting a target video frames per second as in Nam, such that the rendering time to apply the AR option in Du would be determined as in Nam, and if such AR option would exceed the target time, then the rate at which the AR effect is applied could be decreased as it is in Nam. This would result in an improved system as it would avoid issues with rendering of AR options taking longer than the system can handle and could avoid skipping of such frames as suggested by Nam (see Nam, paragraph 0065 teaching “when all of the objects (the first to the ninth objects 351 to 359) included in the animation are rendered for each frame, a rendering time is greater than a target time corresponding to target FPS, and thus, all of the objects cannot be rendered within the target time. Accordingly, a frame skip occurs, and in this case, the target FPS decrease, and thus, the performance and the quality of the animation is deteriorated”).
Regarding claim 13, the limitations of claim 13 correspond to the limitations of claim 3; thus it is rejected on the same grounds as claim 3.
Response to Arguments
Applicant's arguments filed 1/6/2026 have been fully considered but they are not persuasive.
Applicant argues on page 7 of “REMARKS” filed 1/9/2026 that Du does not disclose the subject matter of the amendments to claims 1, 2, 3, 10, 11, 12, 13 and 20. Applicant does not provide any specific arguments as to the specific teachings of Du nor any arguments with respect to the interpretation of the claim limitations. The Examiner respectfully disagrees that Du does not disclose the subject matter of claims 1, 2, 10, 11, 12 and 20 as amended. As fully explained in the rejections above, Du teaches all of the limitations of claims 1, 2, 10, 11, 12 and 20 as amended, and teaches the aspects of these claims alleged by Applicant to not be disclosed.
Applicant’s argument with respect to claims 3 and 13, that Du does not teach “wherein the predefined number of video frames per second depends on a length of time it takes to render the AR option” is persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Du and Nam as explained above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Swierk et al (US Patent No. 11350059), teaching a videoconferencing system in which captured image frames are subjected to AR-like effect processing and a computational burden of the effects to be applied can be used to adjust the frame rate setting with which the effects are applied. See also Yang et al (US PGPUB No. 20240320932), paragraphs 0048-0071, teaching updating a frame rate at which augmented reality processing is applied to incoming frames so as to keep frames within a target frame rate based on the rendering being applied.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT E SONNERS whose telephone number is (571)270-7504. The examiner can normally be reached Monday-Friday, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SCOTT E SONNERS/Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613
1 WO 2024/177668, corresponding to Foreign Patent Document 1 in the IDS filed 8/22/2025
2 US PGPUB No. 20160232699