Prosecution Insights
Last updated: April 19, 2026
Application No. 18/922,093

INTELLIGENT SOFTWARE DEVELOPMENT KIT FRAMEWORK FOR ADVANCED MOTION STABILIZATION

Status: Final Rejection (§103)
Filed: Oct 21, 2024
Examiner: WILSON, DOUGLAS M
Art Unit: 2622
Tech Center: 2600 (Communications)
Assignee: Honeywell International Inc.
OA Round: 2 (Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 75% (320 granted / 427 resolved; +12.9% vs TC avg, above average)
Interview Lift: +16.1% among resolved cases with an interview (strong)
Avg Prosecution: 2y 9m (typical timeline)
Career History: 452 total applications across all art units, 25 currently pending
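The headline percentages above follow from simple arithmetic on the career counts. A minimal sketch (Python; the variable names are illustrative, and the implied Tech Center average assumes the "+12.9% vs TC avg" offset is a plain difference):

```python
# Back-of-envelope reconstruction of the examiner stats shown above.
granted = 320
resolved = 427

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")    # ~74.9%, displayed as 75%

# "+12.9% vs TC avg" implies a Tech Center baseline of roughly:
tc_avg = allow_rate - 0.129
print(f"Implied TC 2600 average: {tc_avg:.1%}")  # ~62.0%
```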

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)

Baseline: Tech Center average estimate. Based on career data from 427 resolved cases.
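The "vs TC avg" offsets let you back out the Tech Center baseline each row was compared against, assuming each offset is a simple difference (the dashboard does not state its formula). Notably, every statute row implies the same ~40% baseline, consistent with a single reference estimate:

```python
# Recovering the Tech Center baseline implied by the statute figures above.
# Each entry is (examiner rate %, offset vs TC avg %).
stats = {
    "§101": (1.9, -38.1),
    "§103": (56.5, +16.5),
    "§102": (22.5, -17.5),
    "§112": (14.4, -25.6),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # examiner rate minus offset = implied TC average
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```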

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending.

Response to Arguments

Applicant's arguments filed 16 December 2025 have been fully considered but they are not persuasive. Applicant incorporated a portion of an objected claim into each of Claims 1, 10, and 19. The subject matter amended into Claims 1, 10, and 19 is taught by Wang (US 2021/0263586) as discussed in detail below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-6, 8-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (US 2021/0263586) in view of Nasiri (US 2015/0193006). All reference is to Wang unless otherwise indicated.

Regarding Claims 1, 10, 19 (Currently Amended), Wang teaches a display motion stabilization apparatus, a computing system, and one or more non-transitory computer-readable storage media comprising a motion stabilization software development kit, comprising: a first set of one or more sensors [fig. 1 @108] configured for generating device motion data for a display device within a vehicle [¶0019, “The display apparatus 102 may be configured to control the first motion sensor 108 to capture a motion signal associated with the display apparatus 102”]; and a second set of one or more sensors [fig. 1 @112] configured for generating vehicle motion data for the vehicle [¶0024, “The second motion sensor 112 may be configured to capture a second motion signal associated with the vehicle 104, where the second motion signal may indicate the physical or kinetic movement, motion, or vibration of the vehicle 104 during the movement of the vehicle 104”]; a motion stabilization model [¶0055, “The prediction of the adjustment of the content 404 may be based on the stored machine learning model 204A”] configured to adjust a position of an object [fig. 4B @412] on a screen [fig. 4B @402] of the display device based on the device motion data and the vehicle motion data [¶0029, “the display apparatus 102 may be configured to adjust the movement of the portion (i.e. where the occupant 118 may be focused) of the displayed content based on a combination of the captured first motion signal and the captured second motion signal being higher than the predefined threshold”] to account for screen motion of the display device [¶0019] and eye motion of a user relative to each other in the vehicle [by measuring screen motion of the display device (¶0019) at the position where the user gazes, the relative motion between the eye gaze position and the display is corrected], wherein adjusting the position of the object [fig. 5 @516] comprises: generating predicted gaze position deviation data based on the device motion data and the vehicle motion data [¶0029, “The display apparatus 102 may be further configured to adjust a movement (or motion/vibration) of the portion of the displayed content 120 in response to the captured first motion signal being higher than a predefined threshold. In some embodiments, the display apparatus 102 may be configured to adjust the movement of the portion (i.e. where the occupant 118 may be focused) of the displayed content based on a combination of the captured first motion signal and the captured second motion signal being higher than the predefined threshold”], wherein the predicted gaze position deviation data comprises estimated position change of a gaze of the eye of the user on the screen of the display device [Table 1 teaches adjusting the object position a distance equal to and in the opposite direction of the determined motion signal. The gaze position deviation data is the distance between the zero-motion object gaze position and the measured motion object position].

Wang does not teach a hardware layer comprising the first set and second set of one or more sensors; an API layer comprising one or more APIs; an application layer comprising one or more applications, the application layer communicatively coupled to the first set of one or more sensors and the second set of one or more sensors via at least a portion of the one or more APIs; and the model residing in a software development kit abstraction layer.

Nasiri teaches a hardware layer comprising a first sensor and a second sensor [¶0056, “The device driver layer 62 provides a software interface to the hardware motion sensors 26 and 28 of the device 10”]; an API layer [fig. 4 @56] comprising one or more APIs [¶0036, “system software 54 includes an application programming interface (API) layer 56”]; an application layer comprising one or more applications [¶0036, “The application software layer 52 communicates with system software 54, which manages the resources of the device, including communication between hardware and software components”], the application layer [fig. 4 @52] communicatively coupled to the first set of one or more sensors [¶0056] and the second set of one or more sensors via at least a portion of the one or more APIs [the hardware layer sensors are coupled to the sensor device drivers, which are coupled to the API layer (fig. 4 @56)]; and a model [construed as the image stabilization API algorithms] residing in a software development kit abstraction layer [¶0037, “A particular API within the API layer 56 can be defined to correspond to one or more motion algorithms, where those corresponding algorithm(s) can be used by an application accessing that API”].

Before the application was filed, it would have been obvious to one of ordinary skill in the art to incorporate a software and hardware architecture coupling hardware sensors to an API layer and then to an application layer, where an image stabilization algorithm is coupled to the application layer, as taught by Nasiri, into the display motion stabilization apparatus taught by Wang. Doing so provides a simple application programming interface (API) available to different applications, allows motion sensor data collection to be more easily defined and used by the user, and allows easier porting and maintenance of a motion sensing design for different hardware requirements, which would be desirable in many applications (Nasiri: ¶0007).

Regarding Claims 2, 11 and 20 (Original), Wang in view of Nasiri teaches the display motion stabilization apparatus of Claim 1, the computing system of Claim 10, and the one or more non-transitory computer-readable storage media comprising a motion stabilization software development kit of Claim 19, wherein the one or more applications comprise a user application [Nasiri: fig. 4 @66], wherein the motion stabilization software development kit is configured for integration with the user application to enable implementation of the motion stabilization model with respect to the user application [Nasiri: ¶0050, “Another example of a different API in the API layer 56 is an image stabilization API 72. This API allows an application program to request status of the device 10 relating to high-level image stabilization functions, e.g., as used in a digital still camera or video camera”].
Regarding Claims 3 and 12 (Original), Wang in view of Nasiri teaches the motion stabilization software development kit of Claim 1 and the computing system of Claim 10, wherein the one or more APIs comprise a sensor data API [Nasiri: fig. 4 @70] and a motion stabilization API [Nasiri: fig. 4 @72].

Regarding Claims 4 and 13 (Original), Wang in view of Nasiri teaches the motion stabilization software development kit of Claim 3 and the computing system of Claim 12, wherein the software development kit abstraction layer further comprises: a sensor interface module [Nasiri: construed as the sensor device driver; ¶0056, “The device driver layer 62 provides a software interface to the hardware motion sensors 26 and 28 of the device 10”], wherein the sensor interface module is configured to (i) receive the device motion data from the first set of one or more sensors [fig. 1 @108], (ii) receive the vehicle motion data from the second set of one or more sensors [fig. 1 @112], and (iii) provide the device motion data and the vehicle motion data, directly or indirectly [Nasiri: ¶0046], to the motion stabilization model [Nasiri: fig. 4 @72].

Regarding Claims 5 and 14 (Original), Wang in view of Nasiri teaches the motion stabilization software development kit of Claim 1 and the computing system of Claim 10, wherein the sensor interface module [Nasiri: fig. 4 @62 (device driver)] is further configured to provide, directly or indirectly, one or more of the device motion data [Nasiri: ¶0051, “The API 74 can allow an application program to simply request that gravity be compensated for in the data, rather than having to perform this compensation itself. Other high-level navigation functions can include applying a navigation Kalman filter to the sensor data, to compensate for error in recorded position when providing continuously-updated information about the position and/or velocity of the device 10”] or the vehicle motion data [alternate limitation not addressed] for rendering on a user interface [Nasiri: fig. 4 @68 (navigation device)].

Regarding Claims 6 and 15 (Original), Wang in view of Nasiri teaches the motion stabilization software development kit of Claim 1 and the computing system of Claim 10, wherein the first set of one or more sensors [fig. 1 @108] comprise one or more of an accelerometer or a gyroscope [Nasiri: ¶0026].

Regarding Claims 8 and 17 (Original), Wang in view of Nasiri teaches the motion stabilization software development kit of Claim 1 and the computing system of Claim 10, wherein the device motion data comprises one or more of (i) device acceleration motion [¶0028, “The at least first motion sensor 108 … an accelerometer, a gyroscope sensor, or any motion sensor”] or (ii) device angular motion [alternate limitation not considered].

Regarding Claims 9 and 18 (Original), Wang in view of Nasiri teaches the motion stabilization software development kit of Claim 1 and the computing system of Claim 10, wherein the vehicle motion data [¶0028, “… the at least second motion sensor 112 may include at least one of a tilt sensor, an accelerometer, a gyroscope sensor, or any motion sensor”] comprises one or more of (i) vehicle acceleration motion or (ii) vehicle angular motion [alternate limitation not addressed].

Allowable Subject Matter

Claims 7 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL.
See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to Douglas Wilson whose telephone number is (571)272-5640. The Examiner can normally be reached 1000-1700 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, Patrick Edouard, can be reached at 571-272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Douglas Wilson/
Primary Examiner, Art Unit 2622
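The reply-deadline arithmetic in the action's conclusion (a three-month shortened statutory period, extendable under 37 CFR 1.136(a), but never beyond six months from mailing) can be sketched with ordinary date math. This is an illustrative computation only: the `add_months` helper is hypothetical, the mailing date is taken from the Feb 25, 2026 entry in the prosecution timeline, and real USPTO deadline practice (e.g. weekend and federal-holiday rollover) is not modeled.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

mailed = date(2026, 2, 25)                  # final action mailing date (from timeline)
shortened_period = add_months(mailed, 3)    # reply due without an extension
statutory_cutoff = add_months(mailed, 6)    # absolute six-month limit

print(f"Shortened statutory period expires: {shortened_period}")    # 2026-05-25
print(f"Latest possible reply with extensions: {statutory_cutoff}") # 2026-08-25
```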

Prosecution Timeline

Oct 21, 2024: Application Filed
Nov 01, 2024: Response after Non-Final Action
Sep 23, 2025: Non-Final Rejection (§103)
Dec 16, 2025: Response Filed
Feb 25, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12596431: VIRTUAL REALITY CONTENT DISPLAY SYSTEM AND VIRTUAL REALITY CONTENT DISPLAY METHOD (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596279: ACTIVE MATRIX SUBSTRATE AND A LIQUID CRYSTAL DISPLAY (granted Apr 07, 2026; 2y 5m to grant)
Patent 12583317: INPUT DEVICE FOR A VEHICLE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585480: USE OF GAZE TECHNOLOGY FOR HIGHLIGHTING AND SELECTING DIFFERENT ITEMS ON A VEHICLE DISPLAY (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579947: DISPLAY DEVICE (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 75%
With Interview: 91% (+16.1%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 427 resolved cases by this examiner. Grant probability derived from career allow rate.
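The "91% with interview" projection appears to combine the base grant probability with the examiner's observed interview lift additively; the dashboard does not state its formula, so the model below is an assumption, not the tool's actual method:

```python
# Assumed additive model behind the interview-adjusted projection.
base_probability = 0.75   # grant probability, taken from the career allow rate
interview_lift = 0.161    # examiner's observed lift on interviewed cases

with_interview = min(base_probability + interview_lift, 1.0)  # cap at 100%
print(f"Projected grant probability with interview: {with_interview:.0%}")  # 91%
```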
