Prosecution Insights
Last updated: April 19, 2026
Application No. 18/348,742

Kamera-Assistenzsystem (Camera Assistance System)

Status: Non-Final OA (§103)
Filed: Jul 07, 2023
Examiner: ABDOU TCHOUSSOU, BOUBACAR
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: Arnold & Richter Cine Technik GmbH & Co. Betriebs KG
OA Round: 1 (Non-Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 67% — above average (294 granted / 436 resolved; +9.4% vs TC avg)
Interview Lift: +14.2% — moderate lift among resolved cases with interview
Avg Prosecution: 2y 5m (typical timeline)
Total Applications: 457 across all art units (21 currently pending)

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 52.1% (+12.1% vs TC avg)
§102: 24.1% (-15.9% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 436 resolved cases
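The statute-specific deltas above are plain percentage-point differences from the Tech Center baseline. A minimal sketch in Python (figures copied from the panel; the `tc_avg` values are back-computed from the stated deltas and are estimates, not published numbers):

```python
# Examiner allowance rates after each rejection type, and the
# displayed deltas vs. the Tech Center average (percentage points).
examiner = {"101": 4.1, "103": 52.1, "102": 24.1, "112": 15.2}
delta = {"101": -35.9, "103": 12.1, "102": -15.9, "112": -24.8}

# Back-compute the implied Tech Center baseline: examiner - delta.
tc_avg = {s: round(examiner[s] - delta[s], 1) for s in examiner}
```

Each statute back-computes to the same ~40.0% baseline, so the displayed deltas are internally consistent with a single TC-wide average estimate.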

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant’s election without traverse of Species I, claims 1-12, 20-27, and 30-34, in the reply filed on 11/11/2025 is acknowledged.

Drawings

The drawings are objected to under 37 CFR 1.83(a). The drawings must show every feature of the invention specified in the claims. Therefore, the “a virtual three-dimensional projection surface” must be shown or the feature(s) canceled from the claim(s). No new matter should be entered.

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Objections

Claim 10 is objected to because of the following informalities: “an image processing unit” should be “the image processing unit”. Appropriate correction is required.
Claim 22 is objected to because of the following informalities: claim 22 should depend from claim 21, since antecedent basis for “the focus position set by means of the setting unit” is provided by claim 21, not claim 1. Appropriate correction is required.

Claim 34 is objected to because of the following informalities: “an image processing unit” should be “the image processing unit”. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: an image processing unit which processes a camera image, in claim 1; an imaging sharpness detection unit for determining/calculating the imaging sharpness of the received camera image, in claims 2-6; a setting unit for setting recording parameters of the camera, in claim 20.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 8, 9, 11, 20, 21, 23, 25, 27 and 30-32 is/are rejected under 35 U.S.C. 103 as being unpatentable over Miyake et al (JP 2012203352A) in view of Yamamoto et al (US 20200081608).

As to claim 1, Miyake discloses a camera assistance system (FIG. 1) comprising: an image processing unit which processes a camera image of a recording subject received from a camera to generate a useful camera image (see [0029]), wherein a 3D preview image with a virtual three-dimensional projection surface, of which the height values correspond to a local imaging sharpness of the received camera image, is generated (see [0019] and [0054], measuring distances at points other than the center point of the display screen; determining the amount of blur [note that blur is inversely proportional to sharpness] at each of the points; and synthesizing a 3D image in accordance with each of the amounts of blur; see [0033], The degree of blur is expressed to the user through a 3D effect. In other words, the greater the blur, the deeper the 3D depth [i.e. height values]; see [0029], 3D image generation unit 111); and a display unit which displays the 3D preview image (FIG. 1, LCD display; FIG. 2B).

Miyake fails to explicitly disclose that the camera image received from the camera is projected onto the virtual three-dimensional projection surface.
However, this is a well-known technique for generating 3D images as evidenced by Yamamoto, who teaches the camera image received from the camera is projected onto the virtual three-dimensional projection surface (see [0054], The stereoscopic image generation unit 403 generates virtual projection image data by projecting the captured image data acquired by the image acquisition unit 401 onto a virtual projection plane (three-dimensional shape model)).

At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Miyake using the known technique of projecting the received image onto a virtual three-dimensional projection surface to provide the predictable result of generating a stereoscopic image (Yamamoto; [0054]).

As to claim 8, Miyake modified by Yamamoto further discloses wherein the image processing unit calculates a stereo image pair on the basis of the camera image projected onto the virtual three-dimensional projection surface, said stereo image pair being displayed on a 3D display unit of the camera assistance system (see [0029] and [0046], stereoscopic image displayed on display 101).

As to claim 9, the combination of Miyake and Yamamoto further discloses wherein the image processing unit calculates a pseudo-3D illustration with artificially generated shadows or an oblique view on the basis of the camera image projected onto the virtual three-dimensional projection surface, which illustration is displayed on a 2D display unit of the camera assistance system (Yamamoto; [0054]-[0055]).

As to claim 11, Miyake modified by Yamamoto further discloses wherein the useful camera image generated by the image processing unit is stored in an image memory (see [0029], storage unit 106).

As to claim 20, Miyake modified by Yamamoto further discloses wherein a setting unit is provided for setting recording parameters of the camera (see [0029], touch panel 102; see [0036], [0038], [0040]).
As to claim 21, Miyake modified by Yamamoto further discloses wherein the recording parameters which can be set by means of the setting unit of the camera assistance system comprise a focus position, an iris diaphragm opening, and a focal length of a camera lens of the camera, as well as an image recording frequency and a shutter speed (see [0036], [0038], [0040], [0045]).

As to claim 23, Miyake modified by Yamamoto further discloses wherein a viewpoint on the camera image which is projected onto the virtual three-dimensional projection surface and is displayed on the display unit of the camera assistance system can be set (see [0021]).

As to claim 25, Miyake modified by Yamamoto further discloses wherein the image processing unit ascertains an instantaneous depth of field on the basis of a set iris diaphragm opening, a set focus position and/or a set focal length of the currently used camera lens of the camera (see [0029], depth-of-field calculation unit 109).

As to claim 27, Miyake modified by Yamamoto further discloses wherein the image processing unit receives a type of camera lens of the camera communicated via an interface and ascertains the instantaneous depth of field from an associated stored depth of field table of the camera lens type on the basis of the set iris diaphragm opening and the set focus position and/or the set focal length of the currently used camera lens (see [0029], 3D lens and depth-of-field calculation unit 109; see [0017], conversion table).

As to claim 30, Miyake modified by Yamamoto discloses a camera comprising a camera assistance system as claimed in claim 1 for assisting in the focusing of the camera (FIG. 1 and [0029]; see rejection of claim 1).

As to claim 31, Miyake modified by Yamamoto discloses wherein the camera is a moving image camera (see [0023]).
As to claim 32, Miyake discloses a method for assisting in the focusing of a camera comprising the steps of: receiving a camera image of a recording subject by an image processing unit from the camera (see [0029]); generating a 3D preview image with a virtual three-dimensional projection surface, of which the height values correspond to a local imaging sharpness of the received camera image (see [0019] and [0054], measuring distances at points other than the center point of the display screen; determining the amount of blur [note that blur is inversely proportional to sharpness] at each of the points; and synthesizing a 3D image in accordance with each of the amounts of blur [i.e. camera image projected onto a virtual three-dimensional projection surface]; see [0033], The degree of blur is expressed to the user through a 3D effect. In other words, the greater the blur, the deeper the 3D depth [i.e. height values]; see [0017], [0029], [0031]); and displaying the 3D preview image on a display unit (FIG. 1, LCD display; FIG. 2B and [0054]).

Miyake fails to explicitly disclose projecting the received camera image by the image processing unit onto a virtual three-dimensional projection surface.

However, this is a well-known technique for generating 3D images as evidenced by Yamamoto, who teaches projecting the received camera image by the image processing unit onto a virtual three-dimensional projection surface (see [0054], The stereoscopic image generation unit 403 generates virtual projection image data by projecting the captured image data acquired by the image acquisition unit 401 onto a virtual projection plane (three-dimensional shape model)).
At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Miyake using the known technique of projecting the received image onto a virtual three-dimensional projection surface to provide the predictable result of generating a stereoscopic image (Yamamoto; [0054]).

Claim(s) 2-6, 12 and 33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Miyake et al (JP 2012203352A) in view of Yamamoto et al (US 20200081608) and further in view of Kim et al (US 20240223885).

As to claim 2, the combination of Miyake and Yamamoto fails to explicitly disclose wherein the local imaging sharpness of the received camera image is determined by means of an imaging sharpness detection unit of the camera assistance system. However, Kim teaches wherein the local imaging sharpness of the received camera image is determined by means of an imaging sharpness detection unit of the camera assistance system (see [0006]-[0007] and [0084]-[0085]). At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combination of Miyake and Yamamoto using Kim’s teachings to determine the local imaging sharpness of the received camera image by means of an imaging sharpness detection unit of the camera assistance system in order to prevent excessive computation by performing an optimal focus search (Kim; [0085]).

As to claim 3, the combination of Miyake, Yamamoto and Kim further discloses wherein the imaging sharpness detection unit has a contrast detection unit or a phase detection unit (Kim; [0084]-[0085]).

As to claim 4, the combination of Miyake, Yamamoto and Kim further discloses wherein the imaging sharpness detection unit of the camera assistance system calculates the local imaging sharpness of the received camera image in dependence upon at least one focus metric (Kim; [0084]-[0085]).
As to claim 5, the combination of Miyake, Yamamoto and Kim further discloses wherein the imaging sharpness detection unit calculates the imaging sharpness of the received camera image using a contrast value-based focus metric on the basis of ascertained local contrast values of the unprocessed camera image received from the camera and/or on the basis of ascertained local contrast values of the processed useful camera image generated therefrom by the image processing unit (Kim; [0084]-[0085]).

As to claim 6, the combination of Miyake, Yamamoto and Kim further discloses wherein the image sharpness detection unit ascertains the local contrast values of the two-dimensional camera image received from the camera and/or of the two-dimensional useful camera image generated therefrom, in each case for individual pixels of the camera image or in each case for a group of pixels of the camera image (Kim; [0084]-[0085]).

As to claim 12, the combination of Miyake and Yamamoto fails to explicitly disclose wherein the image processing unit executes a recognition algorithm for recognizing significant object parts of the recording subject contained in the received camera image and requests corresponding image sections within the camera image with increased resolution from the camera via an interface. However, Kim teaches wherein the image processing unit executes a recognition algorithm for recognizing significant object parts of the recording subject contained in the received camera image and requests corresponding image sections within the camera image with increased resolution from the camera via an interface (see [0096], [0099]).
At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Miyake using Kim’s teachings to execute a recognition algorithm for recognizing significant object parts of the recording subject contained in the received camera image and request corresponding image sections within the camera image with increased resolution from the camera via an interface in order to prevent excessive computation by performing an optimal focus search (Kim; [0085]).

As to claim 33, the combination of Miyake and Yamamoto fails to explicitly disclose wherein the imaging sharpness of the received camera image is calculated in dependence upon a focus metric. However, Kim teaches wherein the imaging sharpness of the received camera image is calculated in dependence upon a focus metric (see [0006]-[0007] and [0084]-[0085]). At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Miyake using Kim’s teachings to calculate the imaging sharpness of the received camera image in dependence upon a focus metric in order to prevent excessive computation by performing an optimal focus search (Kim; [0085]).

Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Miyake et al (JP 2012203352A) in view of Yamamoto et al (US 20200081608) and further in view of Matsumoto et al (US 20080278618).

As to claim 7, the combination of Miyake and Yamamoto fails to explicitly disclose wherein the camera image received from the camera is filtered by a spatial frequency filter in order to reduce fragmentation of the camera image which is displayed on the display unit and projected onto the virtual projection surface.
However, Matsumoto teaches wherein the camera image received from the camera is filtered by a spatial frequency filter in order to reduce fragmentation of the camera image which is displayed on the display unit and projected onto the virtual projection surface (see [0264]-[0265]).

At the time before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combination of Miyake and Yamamoto using Matsumoto’s teachings to filter the camera image received from the camera by a spatial frequency filter in order to filter high-frequency components contained in the image (Matsumoto; [0265]).

Allowable Subject Matter

Claims 10, 22, 24, 26 and 34 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BOUBACAR ABDOU TCHOUSSOU, whose telephone number is (571) 272-7625. The examiner can normally be reached M-F 8am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley, can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BOUBACAR ABDOU TCHOUSSOU/
Primary Examiner, Art Unit 2482

Prosecution Timeline

Jul 07, 2023 — Application Filed
Feb 12, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604072 — CAMERA AND INFRARED SENSOR SHUTTER
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12587755 — VEHICLE-MOUNTED CONTROL DEVICE, AND THREE-DIMENSIONAL INFORMATION ACQUISITION METHOD
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12587724 — DIGITALLY ENHANCED MICROSCOPY FOR MULTIPLEXED HISTOLOGY
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12574509 — METHOD AND APPARATUS FOR ENCODING/DECODING VIDEO AND METHOD FOR TRANSMITTING BITSTREAM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12574476 — VEHICULAR VISION SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67% (82% with interview, +14.2%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 436 resolved cases by this examiner. Grant probability derived from career allow rate.
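The headline figures reduce to simple arithmetic on the examiner's career counts. A quick check in Python (the additive interview-lift model is an assumption, chosen because it reproduces the dashboard's own 67% and 82% figures):

```python
granted, resolved = 294, 436          # career counts from the panel above

allow_rate = granted / resolved       # ~0.674, shown as the 67% allow rate
base = f"{allow_rate:.0%}"

# Assumed additive model: base rate plus the stated +14.2-point
# interview lift, which matches the displayed "with interview" figure.
with_interview = f"{allow_rate + 0.142:.0%}"
```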
