DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Applicant is advised that should claim 4 be found allowable, claim 14 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-5, 7-8 and 10-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Application Publication 2025/0203194 A1 to Ikeda.
With respect to claim 1, Ikeda discloses, in Fig. 1-29, a method, comprising: receiving, by an operating system running on a computing device, first image data from a physical camera of the computing device (paragraph 88-89), the operating system being associated with one or more applications (Fig. 21 and paragraph 108 and 269; where the method can be performed on various devices such as a PC); identifying, by the operating system, a virtual camera control to be performed on the first image data, the virtual camera control configured to represent a physical camera control of the physical camera (paragraph 115; where a clipped image region is determined which corresponds to a virtual camera field of view from the physical camera field of view); generating, by the operating system, manipulated image data based at least in part on the first image data and the identified virtual camera control (paragraph 208); and providing, by the operating system, at least a portion of the manipulated image data to a particular application of the one or more applications (paragraph 121-122; where the particular application is the one to set the next camera parameters).
With respect to claim 2, Ikeda discloses, in Fig. 1-29, the method of claim 1, further comprising: storing, by the operating system of the computing device, the virtual camera control as one or more settings associated with the computing device (paragraph 158-159; where new parameters are calculated and thus stored (at least temporarily) for them to be used); receiving, by the operating system, second image data; applying, by the operating system, settings corresponding to the identified virtual camera control to the second image data; generating, by the operating system, second manipulated image data based at least in part on the second image data and the identified virtual camera control; and providing, by the operating system, at least a portion of the second manipulated image data to the particular application (paragraph 110 and 118; where the process is continuously repeated and thus a “second image” is generated just like the previous one).
With respect to claim 3, Ikeda discloses, in Fig. 1-29, the method of claim 1, wherein the first image data comprises video data associated with a live video stream captured by the physical camera (paragraph 77 and 114).
With respect to claims 4 and 14, Ikeda discloses, in Fig. 1-29, the method of claim 1, wherein the virtual camera control comprises an image correction operation (paragraph 161-165).
With respect to claim 5, Ikeda discloses, in Fig. 1-29, the method of claim 1, wherein the first image data comprises a full sensor readout characterized by a first resolution (Fig. 1 and paragraph 76 and 89; where the captured image 20 is a preclipped image and thus “a full sensor readout” as it is the unaltered image output from the camera, and it would inherently have a corresponding resolution), and wherein generating the manipulated data further comprises: receiving, by a virtual camera of the operating system, a signal corresponding to the virtual camera control from the operating system; performing, by the virtual camera of the operating system, the virtual camera control by altering at least a portion of the full sensor readout; and generating, by the virtual camera of the operating system, the manipulated image data from the altered portion of the full sensor readout, the manipulated image data characterized by a second resolution, less than the first resolution (paragraph 116 and 156; where the manipulated image is a portion of the image from the camera, and thus has a smaller resolution).
With respect to claim 7, Ikeda discloses, in Fig. 1-29, the method of claim 1, wherein a corrective operation comprises at least one of a motion control or a distortion correction (paragraph 168 and 172).
With respect to claim 8, Ikeda discloses, in Fig. 1-29, the method of claim 1, wherein performing the virtual camera control comprises: receiving, by the operating system, a user input corresponding to the virtual camera control; and performing, by the operating system, the virtual camera control on the first image data based at least in part on the user input (paragraph 447).
With respect to claim 10, Ikeda discloses, in Fig. 1-29, the method of claim 1, wherein the particular application is executed on the computing device (paragraph 237).
With respect to claim 11, Ikeda discloses, in Fig. 1-29, the method of claim 1, wherein the particular application is executed on a second computing device, and wherein providing the manipulated image data to the particular application comprises transmitting the manipulated image data to the second computing device (paragraph 279 and Fig. 19).
Claims 12-13 are rejected for similar reasons as claims 1 and 3, as they are corresponding apparatus claims to method claims 1 and 3, respectively, and as Ikeda discloses the method may be operated by a processor/memory embodiment in at least paragraph 369.
Claims 15-19 are rejected for similar reasons as claims 1-5, as they are corresponding program claims to method claims 1-5, respectively, and as Ikeda discloses the method may be operated by a program in a processor/memory embodiment in at least paragraph 369.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 6, 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2025/0203194 A1 to Ikeda.
With respect to claim 6, Ikeda discloses, in Fig. 1-29, the method of claim 1, wherein the identifying the virtual camera control comprises: detecting, by the operating system, an object within the image data; and determining, by the computing device, whether to include the object within an output frame of the image data, in accordance with a determination to include the object within the output frame, wherein the manipulated image data is further transformed (paragraph 143-145; where face or other regions can be detected as objects and the clipping region set to encompass them).
Ikeda does not expressly disclose that including the object in the region includes centering the object in the clipping region, or centering, by the operating system, the object within a threshold amount of a center of the output frame; though some examples, such as Fig. 4 and 10A, do appear to be centered.
However, Official Notice (MPEP § 2144.03) is taken that both the concepts and advantages of centering a person in a clipped area, where the centering is done within an acceptable amount (i.e. a threshold amount), are well known and expected in the art. Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to have included a centering option for the detected subject in Ikeda, as it would merely be the use of a known technique for determining clipped regions of interest to improve a similar device, such as the one disclosed by Ikeda, in the same way.
Claim 9 is rejected for similar reasons as claim 6 above, as Ikeda discloses an embodiment where the camera is moved as a PTZ camera based on the desired clipping region (Fig. 15 and paragraph 212), which would therefore teach wherein performing the virtual camera control comprises automatically centering an object in a field of view of the physical camera.
Claim 20 is rejected for similar reasons as claim 6, as it is a corresponding program claim to method claim 6 and as Ikeda discloses the method may be operated by a program in a processor/memory embodiment in at least paragraph 369.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL M PASIEWICZ whose telephone number is (571)272-5516. The examiner can normally be reached M-F 9 AM - 5:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, George Eng, can be reached at (571)272-7495. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL M PASIEWICZ/Primary Examiner, Art Unit 2699
January 14, 2026