Prosecution Insights
Last updated: April 19, 2026
Application No. 18/834,363

INFORMATION PROCESSING APPARATUS

Status: Final Rejection (§103)
Filed: Jul 30, 2024
Examiner: HALIYUR, PADMA
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: Sony Semiconductor Solutions Corporation
OA Round: 2 (Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (634 granted / 731 resolved; +24.7% vs TC avg; above average)
Interview Lift: +12.9% across resolved cases with an interview (moderate lift)
Avg Prosecution: 2y 0m (fast prosecutor; 24 applications currently pending)
Total Applications: 755 across all art units (career history)
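For readers who want to sanity-check these figures, the sketch below reproduces the card's arithmetic. The allow rate and implied Tech Center baseline follow directly from the published counts; the with/without-interview split is not published, so the counts used for the lift are hypothetical values back-solved to match the 634/731 totals and the displayed +12.9% lift.

```python
# Sanity-check sketch for the Examiner Intelligence card.
# Published figures: 634 granted / 731 resolved, +24.7% vs TC avg,
# +12.9% interview lift. The interview split below is HYPOTHETICAL:
# the page does not publish it, so these counts were back-solved to
# be consistent with the published totals and lift.

granted, resolved = 634, 731
allow_rate = granted / resolved              # 0.867 -> shown as 87%
tc_avg = allow_rate - 0.247                  # implied TC baseline, ~62%

# Hypothetical with/without-interview split (back-solved, not published).
iv_granted, iv_resolved = 283, 300           # with interview: ~94.3%
no_iv_granted, no_iv_resolved = 351, 431     # without:        ~81.4%
lift = iv_granted / iv_resolved - no_iv_granted / no_iv_resolved

print(f"career allow rate : {allow_rate:.1%}")   # 86.7%
print(f"implied TC average: {tc_avg:.1%}")       # ~62.0%
print(f"interview lift    : {lift:+.1%}")        # +12.9%
```

Under this hypothetical split, roughly 94% of interviewed cases allow versus roughly 81% of non-interviewed cases, which appears to be what the card's with/without bars compare.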

Statute-Specific Performance

§101: 2.6% (-37.4% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 28.9% (-11.1% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 731 resolved cases
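One detail worth noting: all four "vs TC avg" deltas back out to the same 40.0% baseline (2.6 + 37.4 = 47.3 - 7.3 = 28.9 + 11.1 = 9.8 + 30.2 = 40.0), consistent with the chart's single Tech Center average estimate. The sketch below reproduces the bars under one plausible reading, namely that each rate is the share of the 731 resolved cases that drew at least one rejection under that statute; the counts are back-solved approximations, not published figures.

```python
# Sketch of the statute-specific bars, assuming each rate is the share
# of the 731 resolved cases with at least one rejection under that
# statute. Counts are back-solved from the displayed percentages and
# are approximate, not published data.

RESOLVED = 731
TC_AVG = 0.40  # every displayed delta backs out to the same 40.0% baseline

cases_with_rejection = {   # approximate, back-solved counts
    "§101": 19,            # 19/731  ~  2.6%
    "§103": 346,           # 346/731 ~ 47.3%
    "§102": 211,           # 211/731 ~ 28.9%
    "§112": 72,            # 72/731  ~  9.8%
}

for statute, n in cases_with_rejection.items():
    rate = n / RESOLVED
    print(f"{statute}: {rate:5.1%} ({rate - TC_AVG:+.1%} vs TC avg)")
```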

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the Amendments and Remarks filed on 03/31/2026. The application is a 371 of PCT/JP2023/000104, 01/06/2022, and claims a foreign priority date of 02/08/2022. Claims 1 and 14 are independent. Claims 1-17 were amended and are pending.

Response to Arguments

The Examiner acknowledges Applicant's amendments and remarks filed on 03/31/2026. They have been fully considered and are persuasive in part. In view of the amendments made to the Specification and Claim 3, the objections to the Specification and the rejections under 35 U.S.C. § 112, second paragraph, have been withdrawn. In view of the amendments made to the claims, the claim interpretation under 35 U.S.C. § 112, sixth paragraph, has also been withdrawn. The claim amendments were sufficient to overcome the above objections and rejections.

With respect to the rejections based on prior art under 35 U.S.C. §§ 102 and 103, the amendments are not sufficient to overcome the rejections, nor are Applicant's arguments persuasive.

On page 11, Applicant alleges that Dayana does not expressly or inherently describe "select, based on the identified subject, a set of sensors from the plurality of sensors" as recited in the amended independent claims. Examiner respectfully disagrees, since in ¶0070 Dayana clearly discloses that the sensor data processing engine 402 can process sensor data 430 from one or more sensors, which is used to detect trigger events for initiating an object detection process. He also discloses that the sensor data 430 can include data from one or more sensors such as a gyroscope, an accelerometer, an IMU, an audio sensor, an ambient light sensor, a depth sensor, and/or a laser range finder, among others listed in ¶0070. Further, in ¶0074, Dayana discloses that sensor data received from the sensor data processing engine can determine whether an object of interest is or is not present in a scene.

On page 12, Applicant further alleges that Dayana does not describe selection of a set of lenses from a plurality of lenses based on an identification of the object, and further alleges that the lenses have been equated to the claimed "sensors." Examiner respectfully disagrees. In the office action posted on 12/31/2025, Examiner clearly pointed to ¶0071 of Dayana, where he discloses that the sensor data processing engine 402 can process the sensor data 430 and provide the processed sensor data to the camera to detect and indicate the presence of an object of interest to be detected, and in some cases collect and/or coalesce a number of optical and non-optical triggers that can indicate the presence of an object of interest to be detected. In response to Applicant's allegation on page 12 that Dayana does not describe the selection of a lens from a plurality of lenses, it is noted that the features upon which Applicant relies are not recited in the rejected claim(s). Examiner would also point to ¶0004, where Dayana discloses "selecting (the lens configuration) from a plurality of available lens configurations." Dayana also discloses that the system can additionally or alternatively leverage other data to guide the lens positioning/repositioning and/or assist with the calculation of detection confidences, such as searching, depth information, autofocus, etc.
However, to hasten prosecution, Examiner has brought in a new reference, Lee; a detailed explanation is provided in the following action. Applicant's arguments toward the rejection of Claim 14 appear to be persuasive, and Examiner has likewise brought in the new reference, Lee, with a detailed explanation provided in the following action. Due to the variation in claim scope via the amendments, a new ground of rejection is proper. In view of the above, Examiner maintains the rejections as detailed in the following action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 8-10 and 13-17 are rejected under 35 U.S.C. 103 as being unpatentable over Dayana et al. (U.S. Patent Publication Number 2022/0394171 A1) in view of Lee et al. (U.S. Patent Publication Number 2021/0125011 A1).

Regarding Claim 1, Dayana discloses an information processing apparatus (Fig. 1 – mobile device 102) comprising: a central processing unit (CPU) configured to: receive first sensing information from a plurality of sensors (Dayana in ¶0027 discloses the use of one or more sensors – accelerometers, gyroscopes, IMUs, motion detection sensors and others); identify a subject (¶0057: Dayana discloses that the mobile device 102 is configured to perform object detection; in ¶0058 he also discloses that the front facing camera 104 can capture images; in ¶0069, Dayana discloses that the processing system 400 can include an object detector 408) based on the first sensing information (Dayana in ¶0070 discloses that the sensor processing engine 402 can process sensor data 430 from one or more sensors, which is used for initiating an object detection process; he also discloses that the sensor data 430 can include data from one or more sensors); and select, based on the identified subject, a set of sensors from the plurality of sensors, wherein the selected set of sensors acquires second sensing information (in ¶0071, Dayana discloses that the sensor data processing engine 402 can process the sensor data 430 and provide the processed sensor data to the camera initiator 404, which can use the processed sensor data and image data to indicate the presence of an object of interest to be detected). Dayana discloses using a plurality of sensors; however, Dayana fails to clearly disclose select, based on the identified subject, a set of sensors from the plurality of sensors, wherein the selected set of sensors acquires second sensing information.
Instead, in a similar endeavor, Lee discloses select, based on the identified subject, a set of sensors from the plurality of sensors (in Lee's invention, Lee teaches that it is desirable to use various sensors and devices to accurately identify various objects; he further teaches the use of a "sensor integration module 100" as disclosed in Figs. 2 and 3, and discloses in ¶0038 - ¶0049 that the different sensors detect objects on the basis of different methods), wherein the selected set of sensors acquires second sensing information (in ¶0043 and throughout, Lee teaches that, to accurately classify detected objects, the system may synchronize and output the detected results of a plurality of sensors having different operating cycles, thereby having the advantage of classifying and identifying objects accurately).

Dayana and Lee are combinable because both are related to imaging devices using sensors. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the sensor integration module as taught by Lee in the imaging module disclosed by Dayana. The suggestion/motivation for doing so would have been to accurately classify and identify objects, as disclosed by Lee in ¶0043 and throughout. Therefore, it would have been obvious to combine Dayana and Lee to obtain the invention as specified in claim 1.

Regarding Claim 2, Dayana in view of Lee discloses wherein the first sensing information includes a plurality of pieces of the first sensing information (Dayana in ¶0027 discloses the use of one or more sensors – accelerometers, gyroscopes, IMUs, motion detection sensors and others), the CPU is further configured to identify the subject based on the plurality of pieces of the first sensing information, and each of the plurality of sensors (Dayana: in ¶0071, Dayana discloses that the sensor data processing engine 402 can process the sensor data 430 and provide the processed sensor data to the camera initiator 404, which can use the processed sensor data and image data to indicate the presence of an object of interest to be detected; further in ¶0074, Dayana discloses that "any sensor data" from the sensor data processing engine 402 can be used by the object detector 408, which can use such data to determine whether an object of interest is or is not present) acquires a respective piece of the plurality of pieces of the first sensing information (Lee: Lee teaches this in Figs. 2, 4-5 and in ¶0068 - ¶0078).

Regarding Claim 3, Dayana in view of Lee discloses wherein the CPU is further configured to: identify the subject based on specific sensing information as the first sensing information for identifying the subject (Dayana: Dayana discloses this in ¶0060 - ¶0065, where he discloses that a change in scene can be detected in pixel data when using the front facing camera (a lower power camera); he also discloses using the front facing camera when motion is detected by an optical motion sensor), and select the set of sensors based on the specific sensing information, wherein the selected set of sensors captures (in ¶0059 he discloses images captured by the front facing camera (a lower power camera); in ¶0108, Dayana discloses that different camera systems can implement capture of one or more images at different processing capabilities such as higher resolution, higher frame rates and power state – this can be interpreted as "specific sensing information" since the specific information is not defined in the claim).
Regarding Claim 4, Dayana in view of Lee discloses wherein the plurality of sensors includes an RGB sensor (Dayana: in Fig. 4, Dayana discloses the use of RGB camera 420 along with sensor data 430; Lee: Fig. 2 - optical camera 11), the RGB sensor includes a plurality of effective pixels (Dayana: in Figs. 2A, 2B, 3A, 3B, 8 and in ¶0063 - ¶0067 and ¶0125 - ¶0127, Dayana discloses the pixel configuration that includes the focus pixels and partial PDAF data; since "effective" pixels have not been defined in the claim, Examiner interprets the "focus pixel" and PDAF data as effective pixels, and since there is more than one such configuration, it is clear that Dayana discloses multiple effective pixels), the specific sensing information is associated with a set of effective pixels, and a number of the set of effective pixels is less than a number of the plurality of effective pixels (in ¶0127, Dayana discloses that the camera system can use PDAF data from pixel array 802 and can sample the PDAF data on some lines while skipping most of the lines; he also discloses methods to allow accurate processing and providing reliable results).

Regarding Claim 8, Dayana in view of Lee discloses wherein the plurality of sensors includes an RGB sensor (Dayana: in Fig. 4, Dayana discloses the use of RGB camera 420 along with sensor data 430; Lee: Fig. 2 - optical camera 11) and a LiDAR sensor (Lee: Fig. 2 – LiDAR sensor 14), and, based on the identified subject being a mobile body, the CPU is further configured to select, as the set of sensors, each of the RGB sensor (Dayana: in ¶0070 he discloses that the sensor data processing engine 402 can process data 430 from one or more sensors, which is used to detect events for the object detection process) and the LiDAR sensor (Lee: in ¶0041 Lee teaches that the LiDAR sensor may detect an object on the basis of TOF methods) from the plurality of sensors (Lee: in ¶0071 - ¶0079 and throughout, Lee teaches that the signal processing unit 30 is configured to synchronize the pulse generation unit and output data detected by the plurality of sensors on the basis of a sensing period of the plurality of sensors; he teaches this using the detection data C2 from the optical camera, which is used as it is closest in time to the detection data Ls from the LiDAR).

Regarding Claim 9, Dayana in view of Lee discloses wherein the CPU selects an RGB sensor (Dayana: in Fig. 4, Dayana discloses the use of RGB camera 420 along with sensor data 430, and in ¶0070 he discloses that the sensor data processing engine 402 can process data 430 from one or more sensors, which is used to detect events for the object detection process) and a ranging sensor (Lee: the LiDAR 14 device detects an object on the basis of a TOF method and is therefore a "ranging sensor") in a case where a distance between the subject identified by the identification processing section and a sensor is less than a predetermined distance (both LiDAR and radar devices can detect at various distances and are not impacted by environmental factors, so it is clear that they have the advantage of detecting at short and long distances; it is also well known to one with ordinary skill in the art that each device operates within a maximum distance, and it is therefore reasonable to interpret Lee's teaching to disclose that the sensor operates "less than a predetermined distance").
Regarding Claim 10, Dayana in view of Lee discloses wherein the plurality of sensors includes an RGB sensor (Dayana in ¶0027 discloses the use of one or more sensors – accelerometers, gyroscopes, IMUs, motion detection sensors and others), and, based on the identified subject being a scenery, the CPU is further configured to select, as the set of sensors, the RGB sensor from the plurality of sensors (Dayana: Dayana discloses this in ¶0053, where he discloses the use of an image capturing system initiated by an object detection process based on one or more triggering events - the flow chart of Fig. 11 and ¶0142).

Regarding Claim 13, Dayana in view of Lee discloses wherein the CPU is further configured to determine an interval for acquisition of the second sensing information based on the identified subject (Lee: in ¶0043 and throughout, Lee teaches that, to accurately classify detected objects, the system may synchronize and output the detected results of a plurality of sensors having different operating cycles, thereby having the advantage of classifying and identifying objects accurately; Lee also teaches this in Figs. 2, 4-5 and in ¶0068 - ¶0078).

Regarding Claim 14, Dayana discloses an information processing apparatus (Fig. 1 – mobile device 102) comprising: a central processing unit (CPU) configured to: receive first sensing information from a set of sensors (Dayana in ¶0070 discloses that the sensor processing engine 402 can process sensor data 430 from one or more sensors, which is used for initiating an object detection process; he also discloses that the sensor data 430 can include data from one or more sensors), wherein the first sensing information includes a plurality of pieces of the first sensing information (further in ¶0070, Dayana discloses that "the object detector 408" analyzes an image captured by the camera and generates a detection result, which can include an indication, prediction, confidence and/or likelihood of a presence of an object of interest; further in ¶0074, he discloses that the sensor data received from the sensor processing engine 402 detects one or more triggers for object detection and the camera initiator 404 can use any sensor data to determine whether an object of interest is or is not present). Dayana fails to clearly disclose a plurality of sensors includes the set of sensors and an undriven sensor, and the set of sensors acquires each of the plurality of pieces of the first sensing information at a respective acquisition timing of a plurality of acquisition timings; determine second sensing information from the undriven sensor based on the plurality of pieces of the first sensing information; and sequentially drive each of the set of sensors at the respective acquisition timing.

Instead, in a similar endeavor, Lee discloses a plurality of sensors includes the set of sensors (in Lee's invention, Lee teaches that it is desirable to use various sensors and devices to accurately identify various objects; he further teaches the use of a "sensor integration module 100" as disclosed in Figs. 2 and 3, and discloses in ¶0038 - ¶0049 that the different sensors detect objects on the basis of different methods) and an undriven sensor, and the set of sensors acquires each of the plurality of pieces of the first sensing information at a respective acquisition timing of a plurality of acquisition timings (in ¶0043, Lee teaches that the plurality of sensors have different operation cycles – which clearly teaches that they are "driven" at different instances and acquire information at different timings); determine second sensing information from the undriven sensor based on the plurality of pieces of the first sensing information; and sequentially drive each of the set of sensors at the respective acquisition timing (in Fig. 5 and ¶0068 - ¶0078, Lee teaches the timing diagram and the sequential operation timing of the different sensors; Lee also teaches the use of the synchronization pulse P_s; in Fig. 5 and ¶0076, Lee teaches that the optical camera 11 may obtain two pieces of detection data during one period of the detection data R_s outputted from the radar, and that the first detection data C_s (C2) may be output, among the two pieces of data, at the timing closest to that for the radar 13).

Dayana and Lee are combinable because both are related to imaging devices using sensors. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the sensor integration module as taught by Lee in the imaging module disclosed by Dayana. The suggestion/motivation for doing so would have been to accurately classify and identify objects, as disclosed by Lee in ¶0043 and throughout. Therefore, it would have been obvious to combine Dayana and Lee to obtain the invention as specified in claim 14.

Regarding Claim 15, Dayana in view of Lee discloses wherein the CPU is further configured to identify a subject based on the plurality of pieces of the first sensing information and the second sensing information (Lee: in Fig. 5 and ¶0068 - ¶0078, Lee teaches the timing diagram and the sequential operation timing of the different sensors; Lee also teaches the use of the synchronization pulse P_s; in Fig. 5 and ¶0076, Lee teaches that the optical camera 11 may obtain two pieces of detection data during one period of the detection data R_s outputted from the radar, and that the first detection data C_s (C2) may be output, among the two pieces of data, at the timing closest to that for the radar 13).

Regarding Claim 16, Dayana in view of Lee discloses wherein the CPU is further configured to determine the second sensing information based on past sensing information (in Fig. 5 and ¶0068 - ¶0078, Lee teaches the timing diagram and the sequential operation timing of the different sensors; in Fig. 5 and ¶0077, Lee teaches storing the first-fourth detection data; in ¶0078 he teaches that when a plurality of detection data are generated, the detection data closest to the reference detection data may be outputted as sensing data).

Regarding Claim 17, Dayana in view of Lee discloses wherein the CPU is further configured to determine the second sensing information based on past sensing information regarding the undriven sensor (Lee: since Lee teaches the use of detection data from a previous period that are obtained or stored, it is clear that Lee teaches the use of past sensing information).

Claims 7 and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Dayana et al. (U.S. Patent Publication Number 2022/0394171 A1) in view of Lee et al. (U.S. Patent Publication Number 2021/0125011 A1) as applied to Claim 1 above, and further in view of Houck et al. (U.S. Patent Publication Number 2020/0256723).
Regarding Claim 7, Dayana in view of Lee discloses wherein the plurality of sensors (Dayana in ¶0027 discloses the use of one or more sensors – accelerometers, gyroscopes, IMUs, motion detection sensors and others) includes an RGB sensor and a spectroscopic sensor (Dayana: in Fig. 4, Dayana discloses the use of RGB camera 420 along with sensor data 430, and in ¶0070 he discloses that the sensor data processing engine 402 can process data 430 from one or more sensors, which is used to detect events for the object detection process), and the CPU is further configured to select, as the set of sensors, each of the RGB sensor and the spectroscopic sensor from the plurality of sensors (Lee). Dayana in view of Lee discloses using multiple sensors; however, Dayana fails to clearly disclose a spectroscopic sensor. Instead, in a similar endeavor, Houck discloses a spectroscopic sensor (Houck teaches the use of spectrometry analysis including a set of sensor elements to capture information relating to multiple frequencies; specifically, in ¶0014, he teaches that the spectrometry measurements may be useful for material analysis, moisture content determination, plant health, plant nutrition, human assessment or the like).

Dayana, Lee and Houck are combinable because all are related to imaging devices using sensors. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a spectroscopic sensor to identify information relating to multiple frequencies, as taught by Houck, in the imaging module disclosed by Dayana in view of Lee. The suggestion/motivation for doing so would have been to use this sensor since spectroscopic sensors detect multiple frequencies. Therefore, it would have been obvious to combine Dayana, Lee and Houck to obtain the invention as specified in claim 7.

Regarding Claim 11, Dayana in view of Lee and Houck discloses wherein the first sensing information includes a plurality of pieces of the first sensing information, the plurality of sensors includes an RGB sensor, a ranging sensor, and a spectroscopic sensor (Dayana in ¶0027 discloses the use of one or more sensors – accelerometers, gyroscopes, IMUs, motion detection sensors and others; Lee: Lee teaches that it is desirable to use various sensors and devices to accurately identify various objects, and further teaches the use of a "sensor integration module 100" as disclosed in Figs. 2 and 3; Houck: Houck teaches the use of spectrometry analysis including a set of sensor elements to capture information relating to multiple frequencies), each of the RGB sensor and the ranging sensor acquires a set of pieces of the plurality of pieces of the first sensing information, and the CPU is further configured to: identify the subject as a face of a person based on the set of pieces of the plurality of pieces of the first sensing information (¶0057: Dayana discloses that the mobile device 102 is configured to perform object detection; in ¶0058 he also discloses that the front facing camera 104 can capture images; in ¶0069, Dayana discloses that the processing system 400 can include an object detector 408); and, based on the identified subject being the face of the person, select the spectroscopic sensor as the set of sensors from the plurality of sensors (Houck: since in ¶0014 he teaches that the spectrometry measurements may be useful for material analysis, moisture content determination, plant health, plant nutrition, human assessment or the like, it is clear that Houck teaches the use of a spectroscopic sensor to detect the face of a person; ¶0026; ¶0051; ¶0060).

Regarding Claim 12, Dayana in view of Lee and Houck discloses wherein the first sensing information includes a plurality of pieces of the first sensing information, the plurality of sensors includes an RGB sensor, a ranging sensor, and a spectroscopic sensor (Dayana in ¶0027 discloses the use of one or more sensors – accelerometers, gyroscopes, IMUs, motion detection sensors and others; Lee: Lee teaches that it is desirable to use various sensors and devices to accurately identify various objects, and further teaches the use of a "sensor integration module 100" as disclosed in Figs. 2 and 3; Houck: Houck teaches the use of spectrometry analysis including a set of sensor elements to capture information relating to multiple frequencies), each of the RGB sensor and the ranging sensor acquires a set of pieces of the plurality of pieces of the first sensing information, and the CPU is further configured to: identify the subject as a face of a person based on the set of pieces of the plurality of pieces of the first sensing information (¶0057: Dayana discloses that the mobile device 102 is configured to perform object detection; in ¶0058 he also discloses that the front facing camera 104 can capture images; in ¶0069, Dayana discloses that the processing system 400 can include an object detector 408); and, based on the identified subject being the face of the person, select the ranging sensor as the set of sensors from the plurality of sensors (Houck: in Fig. 1 and ¶0016 - ¶0022, Houck teaches the use of spectrometry measurement based on a ToF measurement of a sensor device; in ¶0023, he also teaches that the processor performs operations based on ToF and/or spectrometry and an image sensor; further, in ¶0026, he teaches that the sensor may determine a characteristic of a person based on the spectrometry measurement; Lee: in ¶0024, Lee teaches that the automotive sensor integration module includes a plurality of sensors for detecting objects outside a vehicle, and the objects could be a pedestrian in the vicinity of a host vehicle).

Allowable Subject Matter

Claims 5-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PADMA HALIYUR, whose telephone number is (571) 272-3287. The examiner can normally be reached Monday-Friday, 7AM - 4PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Twyler Haskins, can be reached at 571-272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PADMA HALIYUR/
Primary Examiner, Art Unit 2639
April 4, 2026

Prosecution Timeline

Jul 30, 2024
Application Filed
Dec 17, 2025
Non-Final Rejection — §103
Mar 31, 2026
Response Filed
Apr 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604075
DRIVE APPARATUS, IMAGE STABILIZATION IMAGING APPARATUS, AND TERMINAL
2y 5m to grant • Granted Apr 14, 2026
Patent 12591107
OPTICAL ELEMENT DRIVING MECHANISM
2y 5m to grant • Granted Mar 31, 2026
Patent 12585137
SHAPE MEMORY ALLOY ACTUATOR ASSEMBLY
2y 5m to grant • Granted Mar 24, 2026
Patent 12587744
ACTUATOR ASSEMBLY WITH BEARING ARRANGEMENT AND ELECTRICAL INTERCONNECTOR
2y 5m to grant • Granted Mar 24, 2026
Patent 12587743
POSITION DETECTION AND CONTROL OF A MOVABLE BODY INCLUDING AN OPTICAL ELEMENT
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 87%
With Interview: 99% (+12.9%)
Median Time to Grant: 2y 0m
PTA Risk: Moderate
Based on 731 resolved cases by this examiner. Grant probability derived from career allow rate.
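The projection card appears to be simple arithmetic on the examiner figures above: the 87% base is the career allow rate, and the 99% with-interview number is consistent with adding the +12.9% lift in percentage points and capping the displayed value at 99%. A minimal sketch of that reading (the additive lift and the display cap are assumptions, not documented behavior):

```python
# Minimal sketch of the projection card's arithmetic. The additive
# lift and the 99% display cap are ASSUMPTIONS inferred from the
# displayed numbers, not documented behavior.

base = 634 / 731                      # career allow rate, shown as 87%
lift = 0.129                          # interview lift from the examiner card

with_interview = min(base + lift, 0.99)             # 0.996 -> capped at 99%
print(f"grant probability : {base:.0%}")            # 87%
print(f"with interview    : {with_interview:.0%}")  # 99%
```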
