Prosecution Insights
Last updated: April 19, 2026
Application No. 18/592,213

GAZE-BASED AUDIO SWITCHING AND 3D SIGHT LINE TRIANGULATION MAP

Non-Final OA — §103, §112
Filed
Feb 29, 2024
Examiner
REYNOLDS, DEBORAH J
Art Unit
2400
Tech Center
2400 — Computer Networks
Assignee
Adeia Guides Inc.
OA Round
3 (Non-Final)
67%
Grant Probability
Favorable
3-4
OA Rounds
2y 5m
To Grant
80%
With Interview

Examiner Intelligence

Grants 67% — above average
67%
Career Allow Rate
111 granted / 166 resolved
+8.9% vs TC avg
Moderate +14% lift
Without
With
+13.6%
Interview Lift
resolved cases with interview
Typical timeline
2y 5m
Avg Prosecution
80 currently pending
Career history
246
Total Applications
across all art units

Statute-Specific Performance

§101
6.9%
-33.1% vs TC avg
§103
47.6%
+7.6% vs TC avg
§102
19.1%
-20.9% vs TC avg
§112
17.9%
-22.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 166 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot as discussed in the new ground of rejection below. Applicant argues that the cited art (Golyshko, with its fully incorporated references including Shimy, and Huang) does not disclose the amended claims because both Shimy and Golyshko are patentably distinct from a system in which different content items (from different servers) with different titles are switched between devices. Applicant argues that amended claim 1 recites that the content items streaming on the two devices have "a first audio/video stream of a first content item with a first title" and "a second audio/video stream of a second content item with a second title different from the first title," which are "from a first server of a first content provider" and "from a second server of a second content provider different from the first server," while the video and audio stream of Huang is a single stream of the same call from the same server to the two devices; thus, Huang fails to teach these elements (pages 8-9). This argument is respectfully traversed. It is noted that the amended claims do not recite "different content items from different servers with different titles are switched between devices."
In response to applicant's arguments against the references individually (i.e., that the video and audio stream of Huang is a single stream of the same call from the same server to the two devices), one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In this case, the teaching of "first audio/video stream of a first content item with a first title is being provided to a first device associated with a first user from a first server of a first content provider" and "second audio/video stream of a second content item with a second title different from the first title is being provided to a second device associated with a second user from a second server of a second content provider different from the first server, wherein the second content item of the second audio/video stream is different from the first content item of the first audio/video stream and wherein the first title of the first content item is different from the second title of the second content item" is taught by Golyshko and its fully incorporated references including Shimy (see Golyshko: figure 1, paragraphs 0056-0058, 0061, 0069, 0096; Shimy: figures 1, 11-12, 14, paragraphs 0036-0038, 0072, 0122, 0135, 0137, 0167; Yates: figures 4, 6, 8, 13, paragraphs 0046, 0055-0056, 0088, 0099). These limitations read on a first media stream of media content 1/program with a first title (e.g., "Simpsons"/an action movie) being provided to a first device of a first user (e.g., the wife) from a first server/first provider/first source/broadcaster of "Simpsons" or the action movie, and a second content item/media content 2 or any different or other content item (e.g., a sitcom, "Seinfeld – The Contest", HBO on demand, Friends, etc.) being provided to a second device of a second user (e.g., the husband or any other user) from a second server/provider/broadcaster. Huang and/or newly discovered Nguyen (US 20140362201) is relied on for teaching causing an audio portion of the first audio/video stream to be played by the second device simultaneously with the first audio/video stream at the first device while simultaneously continuing to play the audio portion of the first audio/video stream at the first device (see Huang, for example, paragraphs 0133, 0295, teaching that, in response to the user not looking at the display of the wrist-wearable device, the video is no longer presented at the wrist-wearable device and both the speaker of the wrist-wearable device and the speaker of the smart glasses 150 present received audio data; or see Nguyen, figure 9, paragraph 0047, which describes, in response to the user of the second device with screen D looking in direction 905 at screen C of the first user, causing an audio portion (audio signal C) of the first audio/video stream to be played by the second device simultaneously with the first audio/video stream at the first device while simultaneously continuing to play the audio portion of the first audio/video stream at the first device). Therefore, the combination of the references discloses all limitations of the amended claims. For the reasons given above, rejections of claims 1-20 are discussed below.
Claims 21-75 have been canceled.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 12-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Independent claim 12 recites the limitations "a second content item with a second title different" (in lines 6-7) and "the a first title" (in line 10), which are indefinite because the claim boundaries are unknown.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Golyshko (US 20150319400) in view of Huang et al.
(US 20220337697) and/or Nguyen et al. (US 20140362201).

Note: all documents that are directly or indirectly incorporated by reference in their entireties in Golyshko (see, but not limited to, paragraphs 0032, 0059, 0061, 0064, 0066, 0096, 0103), including US 20110069940 (referred to as Shimy), US 8046801 (Ellis), US 20100153885 (Yates), and US 7761892 (E892), or in Huang et al. (paragraphs 0001, 0305), are treated as part of the specification of Golyshko or Huang, respectively (see, for example, MPEP 2163.07(b)).

Regarding claim 12, Golyshko discloses a system comprising: control circuitry configured to: determine that a first audio/video stream of a first content item with a first title is being provided to a first device associated with a first user from a first server of a first content provider (e.g., a system comprising control circuitry 304 to determine that a first audio/video stream with a first title, such as "Simpsons" or the name of an action movie, is being provided to a first device, such as user television equipment in the viewing area, associated with a first user from a first server of the first content provider/source/broadcaster – see, for example, figures 1, 3-7, paragraphs 0056-0058, 0061, 0067-0069, 0071-0072, 0096; Shimy: figures 1, 11-12, 14, paragraphs 0036-0038, 0072, 0122, 0135, 0137, 0167; Yates: figures 4, 6, 8, 13, paragraphs 0046, 0055-0056, 0088, 0099, and the discussion in "Response to Arguments" above); determine that a second audio/video stream of a second content item with a second title (e.g., a second title/program name of a sitcom or Friends/Seinfeld, etc.
from another channel/source) different is being provided to a second device associated with a second user from a second server of a second content provider different from the first server, wherein the second content item of the second audio/video stream is different from the first content item of the first audio/video stream and wherein the first title of the first content item is different from the second title of the second content item (determining that a second audio/video stream is being provided to a second user device, such as user computer equipment or a wireless user device, associated with a second user, wherein the audio/video stream to the first device is different from the video/audio stream to the second device and the second title is different from the first title – see, for example, figures 1, 3-8, paragraphs 0056-0058, 0061, 0069, 0078-0079, 0091, 0096, 0125; Shimy: figures 1, 3-4, 9-12, 14, 17-20, paragraphs 0036-0038, 0072, 0122, 0135, 0137, 0167; Yates: figures 4, 6, 8, 13, paragraphs 0046, 0055-0056, 0088, 0099, 0137-0140, and the discussion in "Response to Arguments" above); determine that a gaze of the second user is directed to a display of the first device (determining, via the detecting module/detecting circuitry, that a gaze/focus or viewing of a second user is directed to a display (e.g., 312) of the first device, or viewing another program name/title on another device – see, for example, figures 3, 5, paragraphs 0048, 0072, 0074, 0078, 0109-0110, 0114-0118, 0122, 0135, 0137, 0167); input/output circuitry (input/output circuitry to a display, speaker, user input interface, detecting module/circuitry, communication link 302, etc.
– figures 3-5, Shimy: figures 3-4), in response to the determining that the gaze of the second user is directed to the display of the first device, configured to (in response to determining that the gaze/focus/looking of one or more users/the second user is directed/returned/focused to the first device – see, for example, figure 5, paragraphs 0074-0075, 0078-0079, 0109-0110, 0113-0118, 0122, 0138; Shimy: paragraphs 0080, 0103, 0105, 0116, 0120, 0122, 0135, 0137, 0140, 0167, figures 11-12, 14, 17, 19-20): cause the second audio/video stream to become paused at the second device (causing the audio/video stream to be paused at the second user device when the user is detected as no longer viewing video on the second device – see, for example, figures 5, 7, paragraphs 0072, 0074-0075, 0078-0079, 0109-0110, 0113-0118, 0122, 0138; Shimy: paragraphs 0080, 0103, 0135, 0137, 0140, 0189, figures 11-12, 14, 17, 19-20). Golyshko further discloses causing a portion of content of the first audio/visual stream to be played by the second device while continuing to play the audio portion of the first audio/visual stream at the first device (see, for example, figures 3-8, paragraphs 0078-0079, 0091, 0125; Shimy: figures 11-12, 14, 20, paragraphs 0135, 0137-0140, 0157); the audio component of videos and other media content displayed on display 312 may be played through speakers 314 or to a receiver (paragraphs 0072, 0108; Shimy: paragraphs 0033, 0061, 0063). However, Golyshko does not explicitly disclose causing an audio portion of the first audio/video stream to be played by the second device simultaneously with the first audio/video stream at the first device while simultaneously continuing to play the audio portion of the first audio/video stream at the first device.
Additionally and/or alternatively, Huang discloses input/output circuitry configured to, in response to determining that a gaze of the user is directed to the display of the first device: cause the second audio/video stream to become paused at the second device; and cause the audio portion of the first audio/video stream to be played by the second device simultaneously with the first audio/video stream at the first device while continuing to play the audio portion of the first audio/video stream at the first device (in response to determining that the gaze/focus/attention of the user is directed to the display of a first device, such as smart glasses or another device, and the video-viewing precondition is not present at the wrist-wearable device/portable device: causing the video/visual stream to become paused at wrist-wearable device 102/portable device 103; and causing the audio portion of the audio/video stream being displayed on the smart glasses or other device/television to be played by both the speaker of wrist-wearable device 102/portable device 103 and the speaker of the smart glasses – see, for example, figures 1C-1E, 1I-2B, 6, 8A, 8D, paragraphs 0007, 0013, 0032-0033, 0121, 0124-0126, 0133, and the discussion in "Response to Arguments" above). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Golyshko with the teachings of Huang, including causing the audio portion of the first audio/video stream to be played on the speakers of both the first device and the second device, in order to yield the predictable result of easily, seamlessly, and automatically switching between video and other calling modes (see, for example, paragraphs 0006-0007, 0014) or saving power for the wrist-wearable device (paragraphs 0031-0032).
Additionally and/or alternatively, Nguyen discloses, in response to determining that a gaze of a second user is directed to a display device, causing an audio portion of the first audio/video stream to be played by a second device simultaneously with the first audio/video stream at the first device while simultaneously continuing to play the audio portion of the first audio/video stream at the first device (see, for example, figure 9, paragraph 0047). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Golyshko with the teaching of Nguyen, including causing an audio portion of the video stream to be played by a second device simultaneously with the first audio/video stream at a first device while simultaneously continuing to play the audio portion of the first audio/video stream at the first device, in order to yield the predictable result of enabling two different users who independently view and listen to two completely different programs on different screens to simultaneously listen to the same audio signal of the screen they are looking at (paragraphs 0002, 0028).

Regarding claim 13, Golyshko in view of Huang and Nguyen discloses the system of claim 12, wherein the first device and the second device are connected to a wireless network (e.g., the same wireless network, such as Bluetooth, Wi-Fi, infrared, etc. – see, for example, Golyshko: paragraphs 0093-0094; Shimy: paragraphs 0066, 0069-0070; Huang: figures 1K, 4A-4C, paragraphs 0169, 0239, 0262).
Regarding claim 14, Golyshko in view of Huang and Nguyen discloses the system of claim 13, wherein, further to causing the second audio/video stream to become paused at the second device, the control circuitry is further configured to: identify the second audio/video stream being provided to the second device via the wireless network based on metadata associated with the second audio/video stream (identifying the second video stream being provided to the second device, such as a mobile phone or other device, via a wireless network based on metadata, such as title, time, playback point, etc., associated with the second audio/visual stream – see, for example, Golyshko: figures 3-7, paragraphs 0093-0094; Shimy: figures 11, 16, 20, paragraphs 0069-0070, 0116, 0138-0140, 0170-0171; Huang: figures 1C-1E, 2A-2B, 8E, paragraphs 0129-0130); and temporarily prevent delivery of the second audio/video stream to the second device via the wireless network (pausing or temporarily preventing delivery of the second audio/video content to the second/other device – see, for example, Golyshko: figures 7-8, paragraph 0128; Shimy: figures 11, 16, 20, paragraphs 0069-0070, 0116, 0138-0140, 0170-0171; Huang: figures 1C-1E, 2A-2B, 8E, paragraphs 0010-0015, 0032, 0040).
Regarding claim 15, Golyshko in view of Huang and Nguyen discloses the system of claim 13, wherein, further to causing the audio portion of the first audio/video stream to be played by the second device, the control circuitry is further configured to: receive over the wireless network the first audio/video stream; identify the first audio/video stream being provided to the first device via the wireless network based on metadata associated with the first audio/video stream (see the similar discussion in the rejection of claim 14, and, for example, Golyshko: figures 3-5, paragraphs 0078-0079, 0091-0094, 0103, 0107; Shimy: figures 3, 7-11, 14, 15-17, 20, paragraphs 0069-0070, 0116, 0138-0140, 0170-0171; Huang: figures 3C-5C, 10, paragraphs 0032, 0117-0118, 0121, 0129-0130); decode the audio portion of the first audio/video stream (decoding the audio portion to be played at the first device and/or second device – see, for example, Golyshko: figure 3, paragraphs 0071-0072, 0076, 0108; Shimy: figure 3, paragraphs 0048, 0056, 0061; Huang: figures 3C-5C, 10, paragraphs 0117-0118, 0121, 0129-0130, 0141, 0155, 0229); combine and encode the audio portion of the first audio/video stream; and distribute, over the wireless network, the audio portion of the first audio/video stream to the second device (combining and encoding the audio portion of the first audio/video stream for distribution, over the wireless network, to the second device with a speaker for playback on the second device – see, for example, Golyshko: figures 3-5, paragraphs 0071-0072, 0076, 0108; Shimy: figures 3-4, 7-11, 14-16, 20, paragraphs 0048, 0056, 0061; Huang: figures 1C-1E, 1I-2B, 6, 8A, 8D, paragraphs 0032-0033, 0121, 0124-0126, 0133, 0290).
Regarding claim 16, Golyshko in view of Huang and Nguyen discloses the system of claim 15, wherein the control circuitry is further configured to: distribute the audio portion of the first audio/video stream to the first device simultaneously with distributing the audio portion of the first audio/video stream to the second device; and synchronize the audio portion of the first audio/video stream distributed to the first device with the audio portion of the first audio/video stream distributed to the second device (distributing/sending the audio portions to two different devices and synchronizing the audio portions so that the audio portions are played on the two devices at the same time when content including the audio portions is simultaneously played on different devices or in a predetermined order – see, for example, Golyshko: figures 3-4, paragraphs 0072, 0087, 0100; Shimy: figure 20, paragraphs 0034, 0048, 0061, 0139-0140, 0151; Huang: figures 3A, 4A-6, 8A, paragraphs 0214, 0288, 0290-0291).
Regarding claim 17, Golyshko in view of Huang and Nguyen discloses the system of claim 14, wherein the control circuitry is further configured to: detect that the gaze of the second user is no longer on the first device (content viewing is not present at the first device or the viewer leaves the viewing area – see, for example, Golyshko: figures 5-8, paragraphs 0009, 0048, 0074-0076, 0110-0118, 0122, 0138, 0147; Shimy: figures 11, 14, 20, paragraphs 0139-0140); terminate the audio portion of the first audio/video stream via the wireless network to the second device (pausing/terminating the audio portion of the first video/audio stream via the wireless network to the second/other device); and resume delivery of the second audio/video stream to the second device via the wireless network (resuming delivery of the second audio/video stream to the second device when the user returns to, is detected at, or is looking at the second device – see, for example, Golyshko: figure 8, paragraphs 0039, 0128; Shimy: paragraphs 0137-0140, 0148, 0152, 0156, 0162; Huang: paragraphs 0030, 0175, 0184, 0234).

Regarding claim 18, Golyshko in view of Huang and Nguyen discloses the system of claim 13, wherein the control circuitry is further configured to: determine that a gaze of the first user is not directed to the display of the first device or the second device; and continue delivery of the audio portion of the first audio/video stream to the first device via the wireless network (determining/detecting that a gaze/focus/viewing of the user is not present/directed to a display of the first device or the second device, such as the wrist-wearable device/portable device, etc., and continuing delivery of the audio portion (audio mode) of the first audio/video stream to the first display device via the wireless network – see, for example, Huang: figures 1A, 1J, 2A, 2B, 5C.
7, paragraphs 0013, 0019, 0118, 0121).

Regarding claim 19, Golyshko in view of Huang and Nguyen discloses the system of claim 12, wherein the control circuitry configured to determine that the gaze of the second user is directed to the display of the first device is further configured to: receive an image captured from the first device; and recognize the image as a face of the second user (receiving an image captured by the detecting device/camera/recorder from the first device and recognizing the image as a face of the second user – see, for example, Golyshko: figures 3, 5, paragraphs 0113-0115, 0117; Shimy: figure 3, paragraph 0050; Huang: figures 1F, 1I, 1L, 5A-5B, paragraphs 0021, 0022, 0105, 0127, 0195, 0197).

Regarding claim 20, Golyshko in view of Huang and Nguyen discloses the system of claim 19, wherein the control circuitry is further configured to: determine, based on the recognized face of the second user, that the second user is authorized to view the first audio/video stream or view content displayed on the first device (see, for example, Shimy: figures 7, 19, paragraphs 0050, 0082, 0110-0113, 0166, 0174, 0181, 0184; Huang: figures 2B, 5B-5C, paragraphs 0021-0022, 0024-0025, 0105).

Regarding claim 1, limitations of the method that correspond to the limitations of the system in claim 12 are analyzed as discussed in the rejection of claim 12.
Particularly, Golyshko in view of Huang and Nguyen discloses a method comprising: determining that a first audio/video stream of a first content item with a first title is being provided to a first device associated with a first user from a first server of a first content provider; determining that a second audio/video stream of a second content item with a second title different from the first title is being provided to a second device associated with a second user from a second server of a second content provider different from the first server, wherein the second content item of the second audio/video stream is different from the first content item of the first audio/video stream and wherein the first title of the first content item is different from the second title of the second content item; determining that a gaze of the second user is directed to a display of the first device; and in response to determining that the gaze of the second user is directed to the display of the first device: causing the second audio/video stream to become paused at the second device; and causing an audio portion of the first audio/video stream to be played by the second device simultaneously with the first audio/video stream at the first device while simultaneously continuing to play the audio portion of the first audio/video stream at the first device (see the similar discussion in the rejection of claim 12).

Regarding claims 2-9, the additional limitations of the method that correspond to the additional limitations of the system in claims 13-20 are analyzed as discussed in the rejection of claims 13-20.

Regarding claim 10, Golyshko in view of Huang and Nguyen discloses the method of claim 1, wherein the causing the audio portion of the first audio/video stream to be played by the second device is further in response to: detecting a gesture of the second user directed at the first user (detecting a gesture, such as a movement, attention, focus, selection, etc.
of the second user directed/selected/called at the first user – see, for example, Golyshko: paragraphs 0011, 0015, 0028, 0116-0117, 0146; Shimy: figures 6-9, 11, 14, paragraphs 0051, 0088, 0091-0092, 0102, 0112-0114, 0124, 0130-0131, 0139-0140; Huang: figures 1A-1E, paragraphs 0134, 0181, 0242, 0264, 0284, 0288).

Regarding claim 11, Golyshko in view of Huang and Nguyen discloses the method of claim 1, wherein the gaze of the second user is determined based on: maintaining a 3D map of an environment indicating respective 3D locations of each of a plurality of camera devices in the environment (3D or three-dimensional locations of each of a plurality of camera devices in the environment/viewing area – see, for example, Golyshko: figures 5-6, paragraphs 0030, 0107, 0116-0117; Huang: paragraphs 0140, 0143, 0197-0198); analyzing first video data from the plurality of camera devices to identify: (a) a 3D location of the second user in the environment; and (b) a 3D location of the first device in the environment; updating the 3D map of the environment indicating the 3D location of the second user and the 3D location of the first device; and analyzing second video data from the plurality of camera devices in combination with cross-referencing the updated 3D map of the environment to determine that the gaze of the second user is directed at the first device (analyzing the first video data from the plurality of camera devices to identify/detect a location of a second/other user entering the area, leaving the area, or moving to a particular location, etc., and a location of the first device in the area or associated with the user; updating the 3D map of the environment indicating the 3D location of the second user and the 3D location of the user in the viewing/detecting area, or leaving the area, moving to another location/room, etc.
– see, for example, Golyshko: figures 5-8, paragraphs 0030, 0107, 0116-0117; Shimy: figures 11, 14-20, paragraphs 0087-0090, 0093-0096; Huang: figures 4A-5C, paragraphs 0140, 0143, 0195-0198).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Horton et al. (US 20230386469) discloses detecting visual attention during user speech (see also paragraphs 0009-0012). Mushkatblat (US 20120311635) discloses a system and method for sharing interactive media guidance information.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN SON PHI HUYNH, whose telephone number is (571) 272-7295. The examiner can normally be reached 9:00 am-6:30 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, NASSER M. GOODARZI, can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AN SON P HUYNH/
Primary Examiner, Art Unit 2426
January 10, 2026

Prosecution Timeline

Feb 29, 2024
Application Filed
May 28, 2025
Non-Final Rejection — §103, §112
Aug 28, 2025
Response Filed
Sep 12, 2025
Final Rejection — §103, §112
Dec 17, 2025
Request for Continued Examination
Dec 31, 2025
Response after Non-Final Action
Jan 10, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12534225
SATELLITE DISPENSING SYSTEM
2y 5m to grant Granted Jan 27, 2026
Patent 12441265
Mechanisms for moving a pod out of a vehicle
2y 5m to grant Granted Oct 14, 2025
Patent 12434638
VEHICLE INTERIOR PANEL WITH ONE OR MORE DAMPING PADS
2y 5m to grant Granted Oct 07, 2025
Patent 12372654
Adaptive Control of Ladar Systems Using Spatial Index of Prior Ladar Return Data
2y 5m to grant Granted Jul 29, 2025
Patent 12365469
AIRCRAFT PROPULSION SYSTEM WITH INTERMITTENT COMBUSTION ENGINE(S)
2y 5m to grant Granted Jul 22, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
80%
With Interview (+13.6%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 166 resolved cases by this examiner. Grant probability derived from career allow rate.
