Prosecution Insights
Last updated: April 19, 2026
Application No. 18/504,876

SYSTEMS AND METHODS FOR APPLYING BEHAVIORAL-BASED PARENTAL CONTROLS FOR MEDIA ASSETS

Status: Final Rejection — §103
Filed: Nov 08, 2023
Examiner: REYNOLDS, DEBORAH J
Art Unit: 2400
Tech Center: 2400 — Computer Networks
Assignee: Adeia Guides Inc.
OA Round: 6 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 7-8
Estimated Time to Grant: 2y 5m
Grant Probability With Interview: 80%

Examiner Intelligence

Career Allow Rate: 67% (111 granted / 166 resolved; +8.9% vs TC avg) — above average
Interview Lift: +13.6% (moderate lift, comparing resolved cases with vs. without an interview)
Typical Timeline: 2y 5m average prosecution; 80 applications currently pending
Career History: 246 total applications across all art units
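
As a sanity check on the figures above, the short sketch below recomputes the career allow rate from the granted/resolved counts and shows how an interview lift is typically derived. The with/without-interview split is not broken out in this report, so the per-cohort rates used for the lift are illustrative assumptions.

```python
# Illustrative sketch (not part of the report's methodology): recompute the
# examiner's career allow rate and interview lift from the counts shown above.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate = granted cases / resolved cases."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Lift, in percentage points, for cases resolved with an interview."""
    return rate_with - rate_without

career = allow_rate(111, 166)          # ~66.9%, displayed as 67% above
# The per-cohort rates below are assumptions; only the aggregate +13.6% lift
# is given in the report.
lift = interview_lift(0.80, 0.664)     # ~+13.6 percentage points

print(f"Career allow rate: {career:.1%}")
print(f"Interview lift: {lift:+.1%}")
```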

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center averages are estimates; based on career data from 166 resolved cases.
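
The deltas above can be read against an implied Tech Center average. A minimal sketch, assuming the deltas are simple differences in percentage points (the report does not define the underlying metric, so the values are treated as opaque percentages):

```python
# Illustrative only: back-derive the Tech Center average implied by each
# statute-specific delta shown above.

examiner_rate = {"§101": 6.9, "§103": 47.6, "§102": 19.1, "§112": 17.9}
delta_vs_tc = {"§101": -33.1, "§103": 7.6, "§102": -20.9, "§112": -22.1}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]   # e.g., §101: 6.9 - (-33.1) = 40.0
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{tc_avg:.1f}%")
```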

Office Action

Final Rejection — §103 (mailed March 11, 2026)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 35-36, 40, 45-46, 50-53, 55-59, 61-62 have been considered but are moot in view of the new ground of rejection discussed below. Applicant argues that Wheatley uses a user's own behavior indicated in a user profile to determine that user's expected response to a deviation portion; the user's behavior in Wheatley does not determine the deviation itself. Shimy describes storing actions performed by a user on a device, which is not the same as detecting physical gestures from a user in a video recording. Jiang describes monitoring computing activity on an endpoint device to identify baseline behaviors that indicate expected behavior patterns of the user of the device and detecting a deviation from the baseline behavior of the user of that device. Determining baseline behaviors and a behavioral deviation using user physical gestures from a video recording (as in amended claims 35 and 45) is not the same as determining baseline behaviors and a behavioral deviation using computing activity on an endpoint device, because physical gestures from a video recording are distinct from device computing data. Finally, nothing in Wheatley, Jiang, Archibong, Mickelsen, Kannan, or Blong describes jumping as matching an abnormal behavior model and screaming as a deviation from user baseline behavior (page 8).

This argument is respectfully traversed. It is noted that non-functional descriptive material does not patentably distinguish over prior art that otherwise renders the claims unpatentable. See, for example, MPEP 2111.05, MPEP 2112.01(III). See also In re Ngai, 367 F.3d 1336, 1339 (Fed. Cir. 2004); Ex parte Nehls, 88 USPQ2d 1883, 1887-90 (BPAI 2008) (precedential) (discussing cases pertaining to non-functional descriptive material); see also the BPAI's decision in Appeal 2009-010851 (for Ser. No. 10/622,876) or the BPAI's decision in Appeal 2011-011929 (for Ser. No. 11/709,170), pages 6-7. In this case, a particular type of gesture/action such as "jumping" or "screaming" could be considered non-functional descriptive material and is not required to be given patentable weight, because these particular types of action/gesture do not functionally change the structure or operation of a system for applying a parental control restriction for a media asset based on matching a gesture/action of a user with an action of a character. The limitations of "jumping" and "screaming" are only given patentable weight as a type of gesture/action. Although non-functional descriptive material is not required to be considered, all claim limitations including the non-functional descriptive material are known in the prior art. For example, Mickelsen (paragraph 0089) or Blong (paragraphs 0043, 0046) discloses that the action/gesture comprises screaming, shouting, or cheering. Blong also discloses that the gesture comprises the user having jumped out of the user's seat, jumping around, or running around (paragraph 0048).

With respect to Applicant's argument that Wheatley uses a user's own behavior indicated in a user profile to determine that user's expected response to a deviation portion, and that the user's behavior in Wheatley does not determine the deviation itself, the Examiner notes that Wheatley discloses that the user profile may include emotional response data generated by monitoring the user (e.g., in real-time) while the user consumes a media asset (paragraph 0031).
The user profile is created based on user activity and behavior/selection (see, including but not limited to, paragraphs 0048, 0050-0051; Ward (US 6756997): col. 28, lines 15-50, col. 29, lines 22-51; Yates: paragraphs 0012-0013; Shimy: paragraphs 0052, 0088, 0091-0092, 0116, 0121-0122). Ward, Shimy, and Yates are fully incorporated by reference in Wheatley. Because Wheatley discloses that the user profile comprises user behaviors/selections/reactions and that the data in the user profile are used to determine the deviation, Wheatley discloses using the user's behavior/action/selection to determine the deviation. Jiang also discloses this teaching, as discussed on pages 8-9 of the non-final rejection.

In response to Applicant's arguments against the references individually (e.g., that determining baseline behaviors and a behavioral deviation using user physical gestures from a video recording (as in amended claims 35 and 45) is not the same as determining baseline behaviors and a behavioral deviation using computing activity on an endpoint device, because physical gestures from a video recording are distinct from device computing data (as in Jiang)), one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In this case, although a computing activity performed by a user could be interpreted as a physical gesture, the rejection does not rely on Jiang for the teaching of a physical gesture from a video recording. Instead, the rejection relies on Archibong or Mickelsen or Kannan or Blong for the teaching that the activity or behavior comprises physical gestures from a video recording, as discussed on pages 9-10 of the non-final rejection. Therefore, the combination of references discloses all claimed limitations recited in amended claims 35 and 45. For the reasons given above, rejections of claims 35-36, 40, 45-46, 50-53, 55-59, 61-62 are discussed below. Claims 1-34, 37-39, 41-44, 47-49, 54, 60 have been canceled.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 35, 40, 45, 50, 52-53, 55-56, 58-59, 61-62 are rejected under 35 U.S.C. 103 as being unpatentable over Wheatley et al. (US 20150181291) in view of Jiang et al. (US 10061916), in view of either Archibong et al. (US 20140068692: paragraphs 0177, 0225) or Mickelsen (US 20170264920: paragraphs 0088-0089) or Kannan et al. (US 20130212606: paragraphs 0040, 0043), and further in view of Blong et al. (US 20160366203).

Note: all documents that are directly or indirectly incorporated by reference in Wheatley (see paragraphs 0025, 0046, 0048, 0051, 0053, 0079, 0086), including Ser. No. 14/137,552 (corresponding to US 20150178511, hereinafter referred to as K511; see also K511, paragraphs 0020, 0031, 0052, for fully incorporated references including Ser. No. 18/038,158 (US 20150033258), Ser. No. 14/038,046 (US 20150033262), Ser. No. 14/038,171 (US 20150033245), Ser. No. 14/038,257 (US 20150033266, hereinafter K266), Ser. No. 14/037,984 (US 20150029087), Ser. No. 14/038,044 (US 20150033259), US 6756997 (referred to as Ward), and US 20110069940 (hereinafter referred to as Shimy)), or in Mickelsen (para. 0077, including Ser. No. 13/691,557 (corresponding to US 20130205314)), are treated as part of the specification of Wheatley or Mickelsen, respectively (see, for example, MPEP 2163.07(b)).

Regarding claim 35, Wheatley discloses a method comprising: receiving a video recording of a user, wherein the video recording is made using an electronic device comprising a camera while the user is viewing a media asset (receiving a video recording of user emotional response/input/reaction/movement/leaving/entering an area by an electronic device comprising a camera while the user is consuming a media asset; see, including but not limited to, figure 7, paragraphs 0002, 0022, 0031, 0040, 0050, 0057, 0090; Shimy: paragraphs 0052, 0088, 0091-0092, 0116, 0121-0122; Ward: col. 28, lines 15-52; K266: paragraphs 0132, 0228; K511: paragraphs 0087, 0094); determining, based on the received video recording, a plurality of user physical actions (determining, based on the received video recording of user physical activities/responses, a plurality of user actions such as entering, leaving, selection, and emotional responses of scaring, anxiety, switching, etc.; figures 5A-7, paragraphs 0031, 0050, 0056-0059, 0123, 0129; Shimy: figures 9-11; Ward: col. 29, line 1 - col. 30, line 21); generating a user baseline behavior dataset based on the plurality of user physical actions (generating an average or baseline or threshold, normal behavior/responses/scared level, etc. based on the plurality of user physical actions; see, including but not limited to, Wheatley: figure 7, paragraphs 0028-0029, 0129-0130, 0103-0104; Shimy: paragraphs 0052, 0088, 0091-0092); based at least in part on determining that a user physical action of the plurality of user physical actions matches a behavior model, identifying the physical action from the user baseline behavior dataset (based at least in part on determining that a physical action of the plurality of user actions, such as an emotional response or movement, is greater than the representative emotional response/movement for the media asset (e.g., the user is more scared than a typical user, or the user leaves the area), identifying the physical action from the user baseline/typical/normal behavior dataset; see, including but not limited to, paragraphs 0100-0104, 0122-0123, 0129-0130; Shimy: figures 9, 11-14, paragraphs 0052, 0088, 0091-0092, 0116, 0121-0122); determining, based at least in part on the user baseline behavior dataset, a specific user physical action of the plurality of user physical actions comprising a deviation of behavior of the user from the user baseline behavior dataset (see, including but not limited to, Wheatley: paragraphs 0028-0029, 0031, 0115, 0130; K266: paragraph 0154; Shimy: paragraphs 0052, 0088, 0091-0092, 0116, 0121-0122); and in response to determining that the specific user physical action matches an action within the media asset, applying a parental control restriction setting for the media asset (in response to determining that the specific user action of response or emotional level matches (e.g., is less than or below the scary level associated with the portion of the media asset, or matches information in an "undesired portion", such as a character dying, within the media asset), applying a control setting for the media asset to be removed or replaced or alerted; see, including but not limited to, Wheatley: paragraphs 0002, 0005, 0020, 0031-0033, 0036, 0049-0050, 0054, 0059-0060, 0100-0103; Shimy: figure 13, paragraphs 0044, 0124-0126, 0129-0131).

Wheatley does not explicitly disclose that the physical actions comprise the term "physical gesture" comprising jumping or screaming; based on determining that a user physical gesture matches an abnormal behavior model, excluding the physical gesture from the user baseline behavior dataset; or determining that a user physical gesture matches a character action of a character.

Additionally and/or alternatively, Jiang discloses a method comprising: receiving a recording of a user, wherein the recording is made using an electronic device (receiving a recording of user/child behaviors using an electronic device such as monitoring module 104; see, including but not limited to, figures 1, 3, col. 4, lines 25-32, col. 8, lines 1-35); generating a user baseline behavior dataset based on a plurality of user physical actions (baseline behaviors 122; see, including but not limited to, figures 1-3, col. 1, lines 36-47); based at least in part on determining that a physical action of the plurality of user physical actions matches an abnormal behavior model, excluding the physical action from the user baseline behavior dataset (based at least in part on determining that a behavior/activity of the user behaviors matches the unusual or unexpected behavior 124, excluding the unusual or unexpected behavior from the baseline behaviors 122; see, including but not limited to, figures 1-3, col. 1, lines 36-47, col. 2, lines 1-52, col. 4, lines 27-42, col. 5, lines 25-30, col. 8, lines 45-67); determining, based at least in part on the user baseline behavior dataset, a specific user physical action of the plurality of user physical actions comprising a deviation of behavior of the user from the user baseline behavior dataset; and in response to determining that the specific user physical action (e.g., unusual behavior) matches a behavior associated with the peer/character/media asset, applying a parental control restriction setting for the behavior of the peer/media asset (see, including but not limited to, figures 1-3, col. 1, lines 36-47, col. 2, lines 1-52, col. 5, line 56 - col. 6, line 25, lines 57-65, col. 8, line 43 - col. 9, line 27, col. 10, lines 45-62, col. 11, line 47 - col. 12, line 30).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wheatley with the teachings including, based on determining that a user action matches an abnormal behavior model, excluding the action from the user baseline behavior dataset, as taught by Jiang, in order to yield the predictable result of preventing a child from engaging in potentially harmful behaviors, therefore improving parental control systems (see col. 1, lines 25-27, 45-47).

Wheatley in view of Jiang does not explicitly disclose that the physical action comprises the term "physical gestures" comprising jumping or screaming. Archibong or Mickelsen or Kannan (hereinafter referred to as Archibong/Mickelsen/Kannan) discloses receiving a video recording of a user, wherein the video recording is made using an electronic device comprising a camera while the user is viewing a media asset, and determining, based on the received video recording, a plurality of user physical gestures (Archibong: paragraphs 0171-0172, 0176-0177, 0225; Mickelsen: paragraphs 0077-0078, 0081, 0088-0089; Kannan: paragraphs 0030, 0035-0036, 0040, 0043). Archibong/Mickelsen/Kannan also discloses that the physical action comprises physical gestures (Archibong: paragraphs 0177, 0225; or Mickelsen: paragraphs 0088-0089; Kannan: paragraphs 0040, 0043). Mickelsen also discloses that the physical gesture comprises screaming of the user (paragraph 0089). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wheatley in view of Jiang with the teaching of physical gesture(s) as taught by Archibong/Mickelsen/Kannan in order to yield the predictable result of accurately and/or conveniently controlling the device/content displayed based on a gesture of the user (see, for example, Archibong: paragraph 0177; Mickelsen: paragraph 0089).

Blong discloses determining that specific user physical actions comprise jumping of the user and screaming of the user (see, including but not limited to, paragraphs 0043, 0046, 0047-0048, 0076), and determining that a physical gesture comprising the user screaming matches a character action of a character within the media asset, applying a setting for the media asset (e.g., the user's scream matches an actor in the movie screaming; the reaction monitoring device captures the user reaction/gesture, including the user's screaming, and applies a setting/link for the media content; see, including but not limited to, paragraphs 0043-0044).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Wheatley with the teaching of a specific user action comprising screaming or jumping of the user, wherein the user action matches a character action of a character within the media asset, as taught by Blong, in order to yield the predictable result of monitoring user action based on character action and to conserve resources and save time for capturing the user's reaction (paragraph 0011).

Regarding claim 40, Wheatley in view of Jiang, Archibong/Mickelsen/Kannan, and Blong discloses the method of claim 35, wherein the parental control restriction setting excludes the first user from consuming any one of a single episode, an entire series of episodes, and a scene from a single episode (replacing or prohibiting or preventing or removing an undesired content portion; see, including but not limited to, Wheatley: paragraphs 0001, 0035, 0101, 0119; Shimy: paragraphs 0129, 0130, 0159).

Regarding claim 45, limitations of a system that correspond to the limitations of the method in claim 35 are analyzed as discussed in the rejection of claim 35. Particularly, Wheatley in view of Jiang, Archibong/Mickelsen/Kannan, and Blong discloses a system (see, including but not limited to, Wheatley: figures 3-4; Jiang: figures 1-2) comprising: input/output circuitry configured to: receive a video recording of a user, wherein the video recording is made using an electronic device comprising a camera while the user is viewing a media asset; and control circuitry configured to: determine, based on the received video recording, a plurality of user physical gestures; generate a user baseline behavior dataset based on the plurality of user physical gestures; based at least in part on determining that a user physical gesture comprising jumping of the plurality of user physical gestures matches an abnormal behavior model, exclude the user physical gesture from the user baseline behavior dataset; determine, based at least in part on the user baseline behavior dataset, that a specific physical gesture comprising screaming of the plurality of user physical gestures comprises a deviation of behavior performed by the user from the user baseline behavior dataset; and in response to determining that the specific user physical gesture comprising screaming matches a character action of a character within the media asset, apply a parental control restriction setting for the media asset (see the similar discussion in the rejection of claim 35 and, including but not limited to, Wheatley: figures 3-4; Jiang: figures 1-3; Archibong: paragraphs 0171-0172, 0176-0177, 0225; Mickelsen: paragraphs 0077-0078, 0081, 0088-0089; Kannan: paragraphs 0030, 0035-0036, 0040, 0043).

Regarding claim 50, the additional limitations that correspond to the additional limitations of claim 40 are analyzed as discussed in the rejection of claim 40.

Regarding claim 52, Wheatley in view of Jiang, Archibong/Mickelsen/Kannan, and Blong discloses the method of claim 35. Wheatley in view of Jiang and Blong discloses that determining the plurality of user physical gestures comprises using an image recognition technique to detect the user physical gesture (reaction, movement, leaving/entering, hand gesture, eye movement, blinks; see, including but not limited to, Wheatley: paragraphs 0056-0057; K511: paragraphs 0067, 0094; Shimy: figures 3, 6, 8-10, paragraphs 0050-0051, 0057, 0088, 0090, 0093, 0097-0098, 0130; Blong: paragraphs 0045, 0049, 0084; Jiang: col. 8, lines 1-67; Archibong: paragraphs 0177, 0225; Mickelsen: paragraphs 0088-0089; Kannan: paragraphs 0030, 0035-0036, 0040, 0043).

Regarding claim 53, Wheatley in view of Jiang, Archibong/Mickelsen/Kannan, and Blong discloses the method of claim 35, wherein the electronic device comprises a wearable device of the user (wearable or mobile device; Jiang: col. 6, lines 11-17; Wheatley: paragraphs 0040, 0086).

Regarding claim 55, Wheatley in view of Jiang, Archibong/Mickelsen/Kannan, and Blong discloses the method of claim 35, wherein determining whether the specific user physical gesture matches the character action comprises: determining that the specific user physical gesture comprises a video action; and determining whether the video action matches one or more actions associated with the character within the media asset (see, including but not limited to, Blong: figures 1, 5C-7B, paragraphs 0037-0039, 0043, 0046, 0048-0049; Archibong: paragraphs 0176-0177, 0225; Mickelsen: paragraphs 0077-0078, 0081, 0088-0089; Kannan: paragraphs 0030, 0035-0036, 0040, 0043).

Regarding claim 56, Wheatley in view of Jiang, Archibong/Mickelsen/Kannan, and Blong discloses the method of claim 35, further comprising: detecting characters/actors/conflict information in all portions/assets of video and applying a setting to remove/replace or prevent displaying of all undesired characters/portions (see, including but not limited to, Wheatley: paragraphs 0032-0035, 0041, 0054, 0100-0103; Jiang: col. 8, line 1+; Shimy: figure 19, paragraphs 0129-0130). Obviously, the method comprises determining whether the character appears in one or more additional media assets other than the media asset; and in response to the determination that the character appears in the one or more additional media assets other than the media asset, applying an additional parental control restriction setting for the one or more additional media assets so that the character/actor or information associated with undesired scenes/portions of all media assets of the program is restricted from being displayed or is removed/modified.

Regarding claims 58-59, 61-62, the additional limitations of the system that correspond to the additional limitations of the method in claims 52-53, 55-56 are analyzed as discussed in the rejection of claims 52-53, 55-56.

Claims 36, 46, 51, 57 are rejected under 35 U.S.C. 103 as being unpatentable over Wheatley et al. (US 20150181291) in view of Jiang et al. (US 10061916), Archibong/Mickelsen/Kannan, and further in view of Blong et al. (US 20160366203) as applied to claim 35 or 45 above, and further in view of Hildreth (US 20090133051).

Regarding claim 36, Wheatley in view of Jiang, Archibong/Mickelsen/Kannan, and Blong discloses the method of claim 35, further comprising determining an age of the user (e.g., child, parent, dad, adult; the user profile contains an age or age group; see, for example, K511: paragraph 0024; Shimy: paragraphs 0054, 0118, 0124, 0129, 0131, 0170; Jiang: col. 8, lines 30-65), wherein the determination comprises: determining that the recording includes one or more images of the first user (image of the user as a child, parent, adult, etc.; see, for example, K511: paragraph 0024; Shimy: figures 6-9, paragraphs 0054, 0118, 0124, 0129, 0131, 0170; Jiang: col. 8, lines 30-65); performing an image recognition analysis of the one or more images of the user (performing an image recognition/detection analysis of one or more images of the user to identify a particular user as a child, adult, parent, etc.; see, for example, K511: paragraph 0024; Shimy: figures 6-9, paragraphs 0054, 0118, 0124, 0129, 0131, 0170; Jiang: col. 8, lines 30-65); and determining the identity of the user based on results of the image recognition analysis (see, for example, K511: paragraph 0024; Shimy: figures 6-9, paragraphs 0054, 0118, 0124, 0129, 0131, 0170; Jiang: col. 8, lines 30-65). However, Wheatley does not explicitly disclose determining an age range of the user. Hildreth discloses determining an age range of a user, wherein the determination comprises: determining that the recording includes one or more images of the first user; performing an image recognition analysis of the one or more images of the user; and determining the age range of the user based on results of the image recognition analysis (performing image recognition analysis (size, facial features, skin texture, etc.) to determine the age range of age 12 and below (child user), age 13 to 18 (teen user), or age 18 and above (adult user); see, including but not limited to, figures 11-12, 15, paragraphs 0166, 0168). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Wheatley with the teachings including determining an age range as taught by Hildreth in order to yield the predictable result of automatically and passively controlling access to television and video content based on the age range of the recognized user (paragraphs 0038, 0135).

Regarding claim 51, Wheatley in view of Jiang, Archibong/Mickelsen/Kannan, Blong, and Hildreth discloses the method of claim 36, wherein the abnormal behavior model comprises a child abnormal behavior model template, a teenager abnormal behavior model template, or an adult abnormal behavior model template, and wherein an abnormal behavior template is determined based on the determined age range of the user (see, including but not limited to, Hildreth: figures 11-12, 15, paragraphs 0166, 0168).

Regarding claims 46 and 57, the additional limitations of the system that correspond to the additional limitations of claims 36, 51 are analyzed as discussed in the rejection of claims 36, 51.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Benedetto et al. (US 20190232168) discloses dynamic allocation of contextual assistance during game play, comprising user physical behavior comprising screaming and shouting, where deviation from baseline can be correlated with a frustrating situation in the game to establish a pattern by which to identify how a particular user expresses frustration (see paragraph 0038).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN SON PHI HUYNH, whose telephone number is (571) 272-7295. The examiner can normally be reached 9:00 am-6:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NASSER M. GOODARZI, can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AN SON P HUYNH/
Primary Examiner, Art Unit 2426
March 11, 2026
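
For readers less familiar with the claim language, the rejection of claim 35 walks through a pipeline of building a baseline from video-derived gestures, excluding gestures that match an abnormal behavior model, flagging deviations from the baseline, and applying a parental control restriction when a deviating gesture matches a character action. Below is a minimal sketch of that flow as an editorial illustration only; the function names, gesture labels, and set-based matching are hypothetical and are not the applicant's or any reference's implementation.

```python
# Hypothetical sketch of the claim 35/45 flow as characterized in the rejection.
# All names, gesture labels, and the simple set-membership "matching" are
# invented for readability; no actual gesture-recognition method is implied.

def build_baseline(history: list[str], abnormal_model: set[str]) -> set[str]:
    """Generate the user baseline behavior dataset from observed gestures,
    excluding any gesture that matches the abnormal behavior model."""
    return {g for g in history if g not in abnormal_model}

def should_restrict(gesture: str, baseline: set[str],
                    character_actions: set[str]) -> bool:
    """Apply the parental control restriction when a gesture that deviates
    from the baseline matches a character action within the media asset."""
    is_deviation = gesture not in baseline
    return is_deviation and gesture in character_actions

baseline = build_baseline(
    history=["sitting", "laughing", "jumping"],  # gestures from the video recording
    abnormal_model={"jumping"},                  # excluded from the baseline
)
# "screaming" deviates from the baseline and matches an on-screen character action.
print(should_restrict("screaming", baseline, character_actions={"screaming"}))  # True
```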

Prosecution Timeline

Nov 08, 2023
Application Filed
Sep 24, 2024
Non-Final Rejection — §103
Dec 16, 2024
Response Filed
Dec 30, 2024
Final Rejection — §103
Apr 01, 2025
Applicant Interview (Telephonic)
Apr 01, 2025
Examiner Interview Summary
Apr 02, 2025
Request for Continued Examination
Apr 08, 2025
Response after Non-Final Action
Apr 23, 2025
Non-Final Rejection — §103
Jul 28, 2025
Response Filed
Aug 26, 2025
Final Rejection — §103
Sep 26, 2025
Interview Requested
Oct 08, 2025
Request for Continued Examination
Oct 17, 2025
Response after Non-Final Action
Nov 20, 2025
Non-Final Rejection — §103
Jan 28, 2026
Interview Requested
Feb 03, 2026
Examiner Interview Summary
Feb 10, 2026
Response Filed
Mar 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this examiner in similar technology areas

Patent 12534225 — SATELLITE DISPENSING SYSTEM (granted Jan 27, 2026; 2y 5m to grant)
Patent 12441265 — Mechanisms for moving a pod out of a vehicle (granted Oct 14, 2025; 2y 5m to grant)
Patent 12434638 — VEHICLE INTERIOR PANEL WITH ONE OR MORE DAMPING PADS (granted Oct 07, 2025; 2y 5m to grant)
Patent 12372654 — Adaptive Control of Ladar Systems Using Spatial Index of Prior Ladar Return Data (granted Jul 29, 2025; 2y 5m to grant)
Patent 12365469 — AIRCRAFT PROPULSION SYSTEM WITH INTERMITTENT COMBUSTION ENGINE(S) (granted Jul 22, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants; reviewing what changed in those prosecutions can show how applicants got past this examiner.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 67%
Grant Probability With Interview: 80% (+13.6%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 166 resolved cases by this examiner. Grant probability derived from career allow rate.
