Prosecution Insights
Last updated: April 19, 2026
Application No. 18/963,044

IMAGE CAPTURING APPARATUS AND METHOD

Non-Final OA §103
Filed
Nov 27, 2024
Examiner
DAGNEW, MEKONNEN D
Art Unit
2638
Tech Center
2600 — Communications
Assignee
Hanwha Vision Co., Ltd.
OA Round
1 (Non-Final)
83%
Grant Probability
Favorable
1-2
OA Rounds
2y 6m
To Grant
99%
With Interview

Examiner Intelligence

Grants 83% — above average
83%
Career Allow Rate
604 granted / 728 resolved
+21.0% vs TC avg
Strong +16% interview lift
+15.8%
Interview Lift
resolved cases with interview
Typical timeline
2y 6m
Avg Prosecution
29 currently pending
Career history
757
Total Applications
across all art units
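As a sanity check, the headline figures in this panel can be reproduced from the raw counts shown (604 granted of 728 resolved). The page does not state how the +15.8-point interview lift combines with the base rate; the simple additive combination below is an assumption, though it does reproduce the 99% with-interview figure.

```python
# Reproduce the dashboard's examiner statistics from the counts above.
granted, resolved = 604, 728

allow_rate = granted / resolved               # career allow rate
print(f"Career allow rate: {allow_rate:.0%}")  # 83%

# Assumption: with-interview probability = base rate + 15.8-point lift,
# capped at 100%. The dashboard may compute this differently.
with_interview = min(allow_rate + 0.158, 1.0)
print(f"With interview: {with_interview:.0%}")  # 99%
```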

Statute-Specific Performance

§101
4.5%
-35.5% vs TC avg
§103
63.7%
+23.7% vs TC avg
§102
21.5%
-18.5% vs TC avg
§112
6.3%
-33.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 728 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Hirose (US 20140362276 A1) in view of JEONG (US 20180154523 A1; hereafter JEONG).

As of Claim 1: Hirose teaches an image capturing apparatus (¶0029, digital still camera 100) comprising: an imager (¶0031, an image sensor 107) comprising a focus lens (¶0031, a third lens group 105 is a focusing lens), the imager configured to generate an image of a subject (¶¶00 and note); a distance determiner configured to determine a distance to the subject; a reference position calculator configured to calculate a reference position of the focus lens based on the distance to the subject (¶¶0100, 0106); and a controller configured to adjust a focal position of the focus lens in a direction of the reference position (¶¶0100, 0106; note that FIG. 14 shows the relationship between focus driving FD[n] that starts at the time tn, a focusing lens position FP[n] obtained at the time tn, and a focusing lens position FP[n-1] and a defocus amount Def[n-1] at a time tn-1. It can be understood from FIG. 14 that in the focus driving FD[n] carried out at the time tn, the defocus amount Def[n-1] needs to be corrected by an amount that corresponds to the difference between focal positions (FP[n]-FP[n-1]). In other words, if the focusing lens position at the accumulation timing of the image signal that was used to calculate the defocus amount during focus lens driving (referred to hereinafter as the "reference lens position") is not known, error will arise in the focus driving amount.), wherein the controller is further configured to determine whether any error is reflected in the reference position based on a calculated sharpness of the image corresponding to the focal position of the focus lens being at the reference position (¶¶0088, 0089, 0106; note that the CPU 121 calculates the sharpness of the pair of image signals stored in the internal memory and stores the sharpness as sharpness information corresponding to the image signals. For example, letting S_HA[n] and S_HB[n] (n = 0, 1, 2, ..., nMAX) be the respective pieces of signal data from the pixels for focus-detection that make up the pair of image signals, the sharpness (indicated by "Sharpness") can be calculated using the equations set out in the discussion of Claim 8 below.).

JEONG is a similar or analogous system to the claimed invention, as evidenced by JEONG's teaching of outputting a signal indicating a malfunction of the robot arm in response to the measured distance being outside a distance error range and an image measurement value of the obtained image being outside an image error range, which would have prompted a predictable variation of Hirose by applying JEONG's known principle. In view of motivations such as thereby further improving image quality, one of ordinary skill in the art would have implemented the claimed variation of the prior art system of Hirose. Therefore, the claimed invention would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

As of Claim 2: Hirose in view of JEONG further teaches the controller is further configured to adjust the focal position of the focus lens in pre-set increments in the direction of the reference position (Hirose ¶0090; note that FIG. 13 shows how the focusing lens 105 moves closer to the in-focus position each time a certain amount of time has elapsed. The elapsed time of image signal readout is shown at the top in this figure. Also, Ts indicates a predetermined cycle of image signal readout, n indicates the readout cycle of the current frame, and nTs indicates the time of the current frame.).

As of Claim 3: Hirose in view of JEONG further teaches the controller is further configured to adjust the focal position of the focus lens during time intervals between adjacent image frames among a plurality of image frames included in the image (Hirose ¶¶0090, 0106).
As of Claim 4: Hirose in view of JEONG further teaches a sharpness calculator configured to calculate a sharpness of an image frame corresponding to the focal position of the focus lens whenever each of the plurality of image frames is generated (Hirose ¶¶0106-0107).

Claims 5-11 are rejected under 35 U.S.C. 103 as being unpatentable over Hirose (US 20140362276 A1) in view of JEONG (US 20180154523 A1; hereafter JEONG), and further in view of Ishtiaq et al. (US 20210360233 A1; hereafter Ishtiaq).

As of Claim 5: Ishtiaq is a similar or analogous system to the claimed invention, as evidenced by Ishtiaq's teaching of outputting a signal indicating a malfunction of the robot arm in response to the measured distance being outside a distance error range and an image measurement value of the obtained image being outside an image error range, which would have prompted a predictable variation of Hirose by applying Ishtiaq's known principle: the controller is further configured to move the focus lens during a time interval between when a sharpness of a previous image frame is calculated and when a sharpness of a subsequent image frame is calculated (¶¶0018, 0030, 0035; note that the variation of frame-level video characteristics with respect to the previous frames and subsequent frames may then be determined. Also, a frame may comprise a feature vector indicative of the one or more characteristics. The aggregation may be based on a mathematical aggregation comprising at least one of: mean, standard deviation, count, or skew.). In view of motivations such as that iterative approaches may be used for determining optimal bit rates, thereby further improving image quality, one of ordinary skill in the art would have implemented the claimed variation of the prior art system of Hirose. Therefore, the claimed invention would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

As of Claim 6: Hirose in view of JEONG in view of Ishtiaq further teaches the controller is further configured to stop adjusting a position of the focus lens based on determining that a calculated sharpness of the image increases and then decreases (Hirose ¶¶0088, 0105).

As of Claim 7: Hirose in view of JEONG in view of Ishtiaq further teaches the controller is further configured to determine that the error is reflected in the reference position based on the calculated sharpness of the image, corresponding to the focal position of the focus lens being at the reference position, being outside a threshold range (Hirose ¶¶0106-0107).

As of Claim 8: Hirose in view of JEONG in view of Ishtiaq further teaches a target position calculator configured to, based on determining that the error is reflected in the reference position, calculate a target position of the focus lens based on a pattern of sharpness changes corresponding to an adjustment of the focal position of the focus lens (Hirose ¶¶0088-0090; note that the CPU 121 calculates the sharpness of the pair of image signals stored in the internal memory and stores the sharpness as sharpness information corresponding to the image signals. For example, letting S_HA[n] and S_HB[n] (n = 0, 1, 2, ..., nMAX) be the respective pieces of signal data from the pixels for focus-detection that make up the pair of image signals, the sharpness (indicated by "Sharpness") can be calculated using the following equations:

Sharpness_sa = Σ_{n=0}^{nMAX-1} (S_HA[n] - S_HA[n+1])² / Σ_{n=0}^{nMAX-1} |S_HA[n] - S_HA[n+1]|   (1)

Sharpness_sb = Σ_{n=0}^{nMAX-1} (S_HB[n] - S_HB[n+1])² / Σ_{n=0}^{nMAX-1} |S_HB[n] - S_HB[n+1]|   (2)

Sharpness = (Sharpness_sa + Sharpness_sb) / 2   (3)

Also, per ¶0089, the sharpness may be calculated using another method; the CPU 121 reads out multiple frames' worth of signal data from the pixels for focus-detection and performs addition processing. The focus detection operation can be performed in parallel with the movement of the focusing lens 105 to the in-focus position, and FIG. 13 shows how the focusing lens 105 moves closer to the in-focus position each time a certain amount of time has elapsed. The elapsed time of image signal readout is shown at the top in this figure. Also, Ts indicates a predetermined cycle of image signal readout, n indicates the readout cycle of the current frame, and nTs indicates the time of the current frame.).

As of Claim 9: Hirose in view of JEONG in view of Ishtiaq further teaches the target position calculator is further configured to form a sharpness graph using corresponding points between multiple selected focal positions of the focus lens and sharpness values calculated for the multiple selected focal positions, respectively, and calculate the target position of the focus lens based on the sharpness graph (Hirose ¶¶0096-0097; note that the focusing lens position of the imaging optical system that corresponds to the accumulation period of multiple addition target frames (i.e., the reference lens position) is calculated according to the ratio of the sharpness of each image signal to the total sharpness of the image signals to be added. Alternatively, the reference lens position that corresponds to the added image signals is calculated by performing weighted addition on the focusing lens positions that correspond to the respective image signals, giving a higher weight the higher the sharpness of the image signal is.).
As of Claim 10: Hirose in view of JEONG in view of Ishtiaq further teaches the target position calculator is further configured to set the focal position of the focus lens corresponding to a maximum value of the sharpness graph as the target position (Hirose ¶¶0106-0107; note that weighted addition is performed on the focusing lens positions that correspond to the respective image signals, giving a higher weight the higher the sharpness of the image signal is, so the position of maximum sharpness dominates. As one example, in the case of adding the image signals from the n-th to (n-2)-th frames, a reference lens position FP[n,n-1,n-2] is obtained for the image signals of the added frames using the following equation, where Sharpness[n] is the sharpness of the image signals of the n-th frame:

FP[n,n-1,n-2] = (Sharpness[n] × FP[n] + Sharpness[n-1] × FP[n-1] + Sharpness[n-2] × FP[n-2]) / (Sharpness[n] + Sharpness[n-1] + Sharpness[n-2])   (4)

¶0107 describes the relationship between the reference lens position calculated using Equation 4).

As of Claim 11: Hirose in view of JEONG in view of Ishtiaq further teaches the controller is further configured to designate a region within an imaging area of the imager that comprises the subject as an exclusion area where use of an artificial intelligence (AI) model is excluded, based on determining that a difference between the reference position of the focus lens, calculated using the AI model, and the target position of the focus lens, calculated using the sharpness graph, exceeds a predetermined threshold (Ishtiaq ¶¶0034, 0048; note that a machine learning algorithm may be used to train the machine learning model. Machine learning algorithms that may be used for training may include, but are not limited to: decision trees, support vector machines, k-nearest neighbors, artificial neural networks (e.g., artificial neural networks based on a long short-term memory (LSTM) artificial recurrent neural network (RNN) architecture), or Bayesian networks.).

As of Claims 12-19: Claims 12-19 recite an image capturing method performed by the image capturing apparatus of Claims 1-11 and are addressed above.

As of Claim 20: All the limitations are addressed in Claim 1. Moreover, Hirose teaches a non-transitory computer readable medium comprising computer code configured to, when executed by at least one processor of (¶0111).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEKONNEN D DAGNEW, whose telephone number is (571) 270-5092. The examiner can normally be reached 8:00AM-5:00PM, M-Th.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEKONNEN D DAGNEW/
Primary Examiner, Art Unit 2638
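For readers skimming the §103 analysis, Hirose's sharpness metric (Equations 1-3) and sharpness-weighted reference lens position (Equation 4) can be sketched in a few lines of Python. This is one illustrative reading of the quoted equations, not code from the reference; the function names and sample signals are invented for the example.

```python
def sharpness(signal):
    """Hirose Eqs. 1-2: sum of squared successive differences divided by
    the sum of absolute successive differences; larger for sharper edges."""
    diffs = [signal[n] - signal[n + 1] for n in range(len(signal) - 1)]
    denom = sum(abs(d) for d in diffs)
    return sum(d * d for d in diffs) / denom if denom else 0.0

def pair_sharpness(s_ha, s_hb):
    """Hirose Eq. 3: average sharpness of the pair of focus-detection signals."""
    return (sharpness(s_ha) + sharpness(s_hb)) / 2

def reference_lens_position(positions, sharpnesses):
    """Hirose Eq. 4: sharpness-weighted average of focusing-lens positions,
    so frames with sharper image signals pull the reference position harder."""
    return sum(p * s for p, s in zip(positions, sharpnesses)) / sum(sharpnesses)

# Illustrative data: a hard edge scores higher than a smooth ramp.
edge = [0, 0, 0, 10, 10, 10]   # sharpness 10.0
ramp = [0, 2, 4, 6, 8, 10]     # sharpness 2.0
assert sharpness(edge) > sharpness(ramp)

# Eq. 4 over three frames: the sharpest frame (weight 10) dominates.
print(reference_lens_position([100, 98, 96], [10, 5, 5]))  # 98.5
```

This weighting is why, as the rejection of Claim 10 notes, the focal position with the maximum sharpness value effectively becomes the target position.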

Prosecution Timeline

Nov 27, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593143
SOLID-STATE IMAGING DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12586142
IMAGE CAPTURING METHOD AND DISPLAY METHOD FOR RECOGNIZING A RELATIONSHIP AMONG A PLURALITY OF IMAGES DISPLAYED ON A DISPLAY SCREEN
2y 5m to grant Granted Mar 24, 2026
Patent 12585173
LENS BARREL
2y 5m to grant Granted Mar 24, 2026
Patent 12581022
DATA CREATION METHOD AND DATA CREATION PROGRAM
2y 5m to grant Granted Mar 17, 2026
Patent 12574662
THRESHOLD VALUE DETERMINATION METHOD, THRESHOLD VALUE DETERMINATION PROGRAM, THRESHOLD VALUE DETERMINATION DEVICE, PHOTON NUMBER IDENTIFICATION SYSTEM, PHOTON NUMBER IDENTIFICATION METHOD, AND PHOTON NUMBER IDENTIFICATION PROCESSING PROGRAM
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+15.8%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 728 resolved cases by this examiner. Grant probability derived from career allow rate.
