Prosecution Insights
Last updated: April 19, 2026
Application No. 18/617,937

Systems and Methods of Synchronizing EEG Data with a Patient's Associated Audio and Video Data

Non-Final OA (§103, §112)
Filed
Mar 27, 2024
Examiner
MUTCHLER, CHRISTOPHER JOHN
Art Unit
3796
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Cadwell Laboratories Inc.
OA Round
1 (Non-Final)
Grant Probability: 47% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability with Interview: 65%

Examiner Intelligence

Career Allow Rate: 47% (22 granted of 47 resolved; -23.2% vs Tech Center average)
Interview Lift: +18.6% (a strong lift; allow rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 3y 5m average prosecution (44 applications currently pending)
Career History: 91 total applications across all art units
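As a sanity check, the headline percentages on the card above follow directly from its raw counts; a quick re-derivation (the roughly 70% Tech Center average is implied by the stated delta, not given directly):

```python
# Re-derive the examiner card's headline figures from its raw counts.
granted, resolved = 22, 47

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 46.8%, shown rounded as 47%

# A -23.2% delta vs the Tech Center implies a TC average of about 70%.
tc_average = allow_rate + 0.232
print(f"Implied TC average: {tc_average:.1%}")
```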

Statute-Specific Performance

§101: 13.3% allow rate (-26.7% vs TC avg)
§102: 15.9% allow rate (-24.1% vs TC avg)
§103: 47.3% allow rate (+7.3% vs TC avg)
§112: 19.8% allow rate (-20.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 47 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 2, and Claims 3-14 by dependency, are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding Claim 2, Claim 2 recites “wherein each of the first, second, third and fourth data is encoded with a predefined unique pattern of a plurality of pulses.” It is unclear whether this means four different “predefined unique patterns” (i.e., one “predefined unique pattern” for the first, one “predefined unique pattern” for the second, etc.), a single “predefined unique pattern” that includes each of the first, second, third and fourth data, four “predefined unique patterns” that are the same pattern for each of the first, second, third and fourth data, or something else. The scope of the claim accordingly cannot be discerned.
For purposes of this Office Action, the limitation “wherein each of the first, second, third and fourth data is encoded with a predefined unique pattern of a plurality of pulses” is being interpreted to mean a single “predefined unique pattern” that includes each of the first, second, third and fourth data.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-15 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over CN 111631703 A (“CN ‘703”) in view of US 2018/0239430 A1 to Tadi et al. (“Tadi”) and CN 110753261 A (“CN ‘261”).

Regarding Independent Claim 1, CN ‘703 teaches: A method of enabling synchronization of a patient's EEG data with associated video and audio data for analysis in a computing device, (CN ‘703 Translation at Para. [0001], “The present invention relates to … a method for synchronizing video signals with biopotential signals…;” Para. [0026], “The video signal and biopotential signal synchronization technique provided by the present invention embodiments can be applied to any suitable research and application field of biopotential signals, including but not limited to … electroencephalographic signals, etc.”); wherein the patient's EEG, (Para. [0010], “Step S310: Acquiring the biopotential signal of a test subject, and adding a time stamp to the biopotential signal;” Para. [0026]); video (Para.
[0010], “Step S330: Acquiring the video signal of the test subject, and adding a time stamp to the video signal, wherein the audio signal and the video signal have a time synchronization relationship;”); and audio data are acquired over a period of time of monitoring the patient, (Para. [0010], “Step S320: Acquiring the audio signal of the test subject, and adding a time stamp to the audio signal;”); comprising: acquiring, … the EEG data of the patient; (Para. [0010], “Step S310: Acquiring the biopotential signal of a test subject, and adding a time stamp to the biopotential signal;” Para. [0026], “The video signal and biopotential signal synchronization technique provided by the present invention embodiments can be applied to any suitable research and application field of biopotential signals, including but not limited to … electroencephalographic signals, etc.”); generating, using a device, first data indicative of a first differential electrical signal, second data indicative of a second differential electrical signal, [a video synchronization signal,] and a plurality of audio signals (Para. [0035], “The initial synchronization signal may be generated and output by a synchronization central device (shown in Figure 2 as the synchronization center). Optionally, the initial synchronization signal may be transmitted on a separate channel and divided into two signals, one referred to as the audio synchronization signal and the other as the biopotential synchronization signal. The two synchronization signals may be respectively superimposed on the biopotential signal (i.e., biopotential data) and the audio signal (i.e., audio data);” Para. 
[0034], “…the biopotential signal is aligned with the video signal based on the time synchronization relationship between the audio signal and the video signal as well as the alignment result of the biopotential signal with the audio signal).”) CN ‘703’s “initial synchronization signal” is such a “first data indicative of a first differential electrical signal” as claimed. CN ‘703’s “biopotential synchronization signal” is such a “second data indicative of a second differential electrical signal” as claimed. CN ‘703’s “audio synchronization signal” is such a “plurality of audio signals” as claimed. See Para. [0035] describing the “audio synchronization signal” being “played,” thus rendering it a “plurality of audio signals” as appears to be contemplated in view of Para. [0065] of the Present Specification.

CN ‘703 does not disclose “a plurality of visible light signals.” CN ‘703 synchronizes video “based on the time synchronization relationship between the audio signal and the video signal as well as the alignment result of the biopotential signal with the audio signal” rather than using “a plurality of visible light signals.” This deficiency is addressed below.

It is noted with respect to the consideration of whether Claim 1 recites patent eligible subject matter pursuant to 35 USC 101 that the above limitation is, by the Examiner’s interpretation, significant. The data and signals in the above method step are “generated” independently from any gathered data: they are, in a sense, arbitrary, as they serve predominantly as a time marker upon which subsequently recited analysis is based. Accordingly, they are not being interpreted to constitute either mere data gathering/outputting or any other extrasolution activity. Because generation of at least such light signals and audio signals as claimed cannot practically be performed in the human mind, Claim 1 does not recite a mental process.
wherein the first data, the second data, the plurality of [video synchronization] signals and the plurality of audio signals are generated synchronously; (Para. [0035]; Para. [0034]) Per Para. [0035], CN ‘703’s “biopotential synchronization signal” and “audio synchronization signal” are derived from a single signal: CN ‘703’s “initial synchronization signal.” All three are thus generated synchronously. Per Para. [0034], CN ‘703’s “video signal” is aligned via “the time synchronization relationship between the audio signal and the video signal as well as the alignment result of the biopotential signal with the audio signal.” CN ‘703’s “video signal” is thus aligned via the same signal as both CN ‘703’s “biopotential synchronization signal” and “audio synchronization signal.” receiving, by a multi-channel amplifier, the first data and second data from the device and the EEG data of the patient from the plurality of sensors; (Para. [0035], “The biopotential acquisition device acquires a composite biopotential signal containing both the biopotential synchronization signal and the biopotential signal. The biopotential acquisition device may be, for example, a biopotential amplifier in a biopotential acquisition system.”); acquiring, by an audio acquisition device, the plurality of audio signals; (Para. [0035], “The audio signal and the audio synchronization signal output in sound form by the audio playback device can be acquired by the same audio acquisition device, which acquires a composite audio signal containing both the audio synchronization signal and the audio signal; the audio acquisition device may be, for example, a microphone.”); generating, by the audio acquisition device, the patient's audio data and fourth data based on the plurality of audio signals; (Para. [0035]; Para. 
[0036], “Those skilled in the art will appreciate that after acquiring the composite audio signal, it can be processed to identify the positions and waveforms of the audio synchronization signal and the audio signal.”); CN ‘703’s “composite audio signal” contains “the patient’s audio data.” CN ‘703’s “positions and waveforms of the audio synchronization signal and the audio signal” is such “fourth data based on the plurality of audio signals” as claimed.

receiving, by the computing device, the first data, the second data and the patient's EEG data from the multi-channel amplifier, the third data and the patient's video data from the video acquisition device and the fourth data and the patient's audio data from the audio acquisition device; (Para. [0010]; Para. [0083], “Those skilled in the art will appreciate that by combining the units and algorithm steps of the various examples described herein, the present invention can be implemented by electronic hardware or by a combination of computer software and electronic hardware.”);

comparing, by the computing device, the first data with the third data in order to calculate a first time compensation; (Para. [0010], “Step S347: Aligning the biopotential signal with the video signal based on the time synchronization relationship between the audio signal and the video signal and the alignment result of the biopotential signal with the audio signal…”);

comparing, by the computing device, the second data with the fourth data in order to calculate a second time compensation; (Para. [0010], “Step S346: Aligning the biopotential signal with the audio signal based on the time stamp of the biopotential synchronization signal in the composite biopotential signal and the time stamp of the audio synchronization signal in the composite audio signal;”) and

applying, by the computing device, the first time compensation to the patient's video data and the second time compensation to the patient's audio data. (Para. [0010], “Step S346” and “Step S347”).

CN ‘703 does not disclose: acquiring, using a plurality of sensors positioned on the patient's scalp, the EEG data of the patient; a plurality of visible light signals, wherein the first data, the second data, the plurality of visible light signals and the plurality of audio signals are generated synchronously; acquiring, by a video acquisition device, the plurality of visible light signals; generating, by the video acquisition device, the patient's video data and third data based on the plurality of visible light signals.

Tadi describes a “BRAIN ACTIVITY MEASUREMENT AND FEEDBACK SYSTEM” (Title). Tadi is reasonably pertinent to the problem faced by the inventor, and is thus analogous art. See MPEP 2141.01(a). Tadi teaches: acquiring, using a plurality of sensors positioned on the patient's scalp, the EEG data of the patient (Abstract, “A head set (2) comprises a brain electrical activity (EEG) sensing device (3) comprising EEG sensors (22) configured to be mounted on a head of a wearer so as to position the EEG sensors (22) at selected positions of interest over the wearers scalp….”).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of CN ‘703 with the teachings of Tadi (i.e., to use such a plurality of sensors positioned on the patient’s scalp as taught by Tadi for EEG data acquisition in the method of CN ‘703) because extensive research supports the superiority of such methodology (Tadi at Para. [0015], “The placement of EEG electrodes have been widely researched and one of the commonly used models is the so called ‘10-20 electrode placement system.’”).

CN ‘261 describes “Audio-video delay testing method, apparatus, computer device, and storage medium” (Title). CN ‘261 is reasonably pertinent to the problem faced by the inventor, and is thus analogous art. See MPEP 2141.01(a).
CN ‘261 teaches: a plurality of visible light signals, (Para. [0033]; Para. [0034], “…the optoelectronic conversion device 150, which is disposed over the playback interface of the first terminal 110, continuously collects the optical signal corresponding to the playback interface in real time, and converts the collected optical signal into the first audio signal via the photosensitive resistor in the device. The converted first audio signal and the received second audio signal are synthesized into a dual-channel audio signal via the dual-channel audio line. The computer device 140 receives the dual-channel audio signal and determines the audio-video playback delay corresponding to the first audio-video test sequence based on the first and second audio signals.”); wherein the first data, the second data, the plurality of visible light signals and the plurality of audio signals are generated synchronously (Para. [0033], “The video data and the audio signal corresponding to the first audio-video test sequence are synchronized.”); acquiring, by a video acquisition device, the plurality of visible light signals; (Para. [0034]); generating, by the video acquisition device, the patient's video data and third data based on the plurality of visible light signals; (Para. [0034]; Para. [0037]) It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of combined CN ‘703 and Tadi with the teachings of CN ‘261 (i.e., to use such a plurality of light signals for video synchronization as taught by CN ‘261 rather than merely basing video synchronization on a combination of audio signal synchronization and biopotential signal synchronization as taught by CN ‘703) in order to increase accuracy (CN ‘261 at Paras. [0002] through [0003]). Regarding Claim 2, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 1 as explained above. 
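The compensation logic the rejection maps onto CN ‘703 can be sketched in a few lines. This is an illustrative reading of Claim 1 as characterized above, not code from the application or the cited references; all names, the `Stream` type, and the numeric values are hypothetical. The first/third data pair yields the video compensation and the second/fourth pair yields the audio compensation, each computed from corresponding sync-pulse start times (see Claims 5 and 7):

```python
from dataclasses import dataclass

# Minimal sketch of the Claim 1 compensation logic as characterized in the
# rejection: each data stream carries a synchronization pulse, and a time
# compensation is the difference between corresponding pulse start times.
# All names here are illustrative assumptions, not from the record.

@dataclass
class Stream:
    samples: list          # recorded samples (unused in this sketch)
    pulse_start: float     # detected start time of the sync pulse, in seconds
    t0: float = 0.0        # stream's own clock offset

def time_compensation(reference: Stream, other: Stream) -> float:
    """Offset aligning `other` to `reference` (Claims 5 and 7: difference
    between corresponding pulse start times)."""
    return reference.pulse_start - other.pulse_start

def synchronize(first, second, third, fourth, video, audio):
    # first vs third data -> first compensation, applied to the video data;
    # second vs fourth data -> second compensation, applied to the audio data.
    first_comp = time_compensation(first, third)
    second_comp = time_compensation(second, fourth)
    video.t0 += first_comp
    audio.t0 += second_comp
    return first_comp, second_comp
```

A Claim 17/18-style guard would simply skip the computation when the corresponding pulse start times already match (i.e., the streams are not out of sync).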
CN ‘703 additionally teaches: wherein each of the first, second, … and fourth data is encoded with a predefined unique pattern of a plurality of pulses. (Para. [0040], “According to the embodiments of the present invention, the initial synchronization signal may be a switching signal or a continuous pulse signal.”). CN ‘261 additionally teaches: wherein each of the… third … data is encoded with a predefined unique pattern of a plurality of pulses. (Para. [0041], “Since the video data being played by the first terminal (which are part of the first audio-video test sequence) contain state changes, the photosensitive resistor captures the corresponding optical signal during each state change and converts the optical signal into a pulse signal in the first audio signal.”)

Regarding Claim 3, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 2 as explained above. CN ‘703 additionally teaches: wherein each of the plurality of pulses has an associated start time, end time and duration time (Para. [0040]; Para. [0041], discussing “time stamps”).

Regarding Claim 4, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 3 as explained above, wherein the first and third data are determined to be out of sync with each other if a start time of a pulse in the first data is different from a start time of a corresponding pulse in the third data. (Para. [0041], “Exemplarily, the video signal and the biopotential signal each contain a time stamp, and the time stamps of the video signal and the biopotential signal are calibrated using the same signal source;” Para. [0067]).

Regarding Claim 5, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 4 as explained above, wherein the first time compensation is calculated as a difference between the start time of the pulse in the first data and the start time of the corresponding pulse in the third data (Para. [0041], “Exemplarily, the video signal and the biopotential signal each contain a time stamp, and the time stamps of the video signal and the biopotential signal are calibrated using the same signal source;” Para. [0067]).

Regarding Claim 6, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 3 as explained above, wherein the second and fourth data are determined to be out of sync with each other if a start time of a pulse in the second data is different from a start time of a corresponding pulse in the fourth data (Para. [0044], “According to the embodiments of the present invention, aligning the biopotential signal with the audio signal may include: calculating the difference between the time stamp of the biopotential synchronization signal in the composite biopotential signal and the time stamp of the audio synchronization signal in the composite audio signal, and correcting the difference between the time stamps of the biopotential signal and the audio signal to the calculated time stamp difference. Those skilled in the art will understand the implementation of aligning the biopotential signal with the audio signal based on the difference between the time stamps of the biopotential synchronization signal and the audio synchronization signal, which will not be further elaborated herein;” Paras. [0067] through [0068]).

Regarding Claim 7, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 6 as explained above, wherein the second time compensation is calculated as a difference between the start time of the pulse in the second data and the start time of the corresponding pulse in the fourth data (Para. [0044], “According to the embodiments of the present invention, aligning the biopotential signal with the audio signal may include: calculating the difference between the time stamp of the biopotential synchronization signal in the composite biopotential signal and the time stamp of the audio synchronization signal in the composite audio signal, and correcting the difference between the time stamps of the biopotential signal and the audio signal to the calculated time stamp difference. Those skilled in the art will understand the implementation of aligning the biopotential signal with the audio signal based on the difference between the time stamps of the biopotential synchronization signal and the audio synchronization signal, which will not be further elaborated herein;” Paras. [0067] through [0068]).

Regarding Claim 8, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 2 as explained above. CN ‘261 additionally teaches: wherein the predefined unique pattern has a total time duration (Fig. 4 shows a total time duration of a “pulse spectrum;” see Para. [0071]).

Regarding Claim 9, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 8 as explained above. CN ‘703 additionally teaches: wherein the predefined unique pattern of said total time duration is repeated continuously over at least the period of time of monitoring the patient. (Para. [0010], “Step S347” includes “returning to Step S310” and repeating the process, during which the pulse pattern is repeated)

Regarding Claim 10, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 8 as explained above. CN ‘261 additionally teaches: wherein the predefined unique pattern includes a first stream of pulses that is followed by a second stream of pulses and that is followed by a third stream of pulses. (Fig. 4; see Annotated Fig. 4, below).

[Annotated Fig. 4: greyscale reproduction of CN ‘261 Fig. 4 with the Examiner’s annotations; image omitted]

Regarding Claim 11, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 10 as explained above. CN ‘261 additionally teaches: wherein the first stream includes a first number of single pulses that are spaced apart from each other by a first time interval, and wherein the first stream has a first time duration (Fig. 4; see Annotated Fig. 4, above).

Regarding Claim 12, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 11 as explained above. CN ‘261 additionally teaches: wherein the second stream includes a second number of dual pulses that are spaced apart from each other by a second time interval, and wherein the second stream has a second time duration (Fig. 4; see Annotated Fig. 4, above).

Regarding Claim 13, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 12 as explained above. CN ‘261 additionally teaches: wherein the third stream includes a third number of triple pulses that are spaced apart from each other by a third time interval, and wherein the third stream has a third time duration (Fig. 4; see Annotated Fig. 4, above).

Regarding Claim 14, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 13 as explained above. CN ‘261 additionally teaches: wherein the total time duration is a sum of the first, second and third time durations (Fig. 4; see Annotated Fig. 4, above).

Regarding Claim 15, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 1 as explained above. CN ‘703 additionally teaches: and an acoustic generator for generating the plurality of audio signals (Para.
[0035], “The audio signal and the audio synchronization signal output in sound form by the audio playback device can be acquired by the same audio acquisition device…”); CN ‘261 additionally teaches: wherein the device includes at least one light emitting diode for generating the plurality of visible light signals (Para. [0032], “The optoelectronic conversion device 150 comprises a photosensitive resistor and a first audio line connected to the photosensitive resistor, and is disposed over the playback interface of the first terminal 110.”)

Regarding Claim 17, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 1 as explained above, wherein the first data is compared with the third data in order to calculate a first time compensation if the first and third data are determined to be out of sync with each other (Para. [0067], “The time stamp offset represents the offset between the time stamps of the audio signal or video signal relative to the time stamp of the biopotential signal, that is, the difference between them. By recording the time stamp offset, the synchronization status among the audio signal, video signal, and biopotential signal can be recorded. Before any alignment is performed, the time stamp offset may be set to 0.”).

Regarding Claim 18, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 1 as explained above, wherein the second data is compared with the fourth data in order to calculate a second time compensation if the second and fourth data are determined to be out of sync with each other (Para.
[0068], “According to the embodiments of the present invention, aligning the biopotential signal with the audio signal may include calculating the difference between the time stamp of the biopotential synchronization signal in the composite biopotential signal and the time stamp of the audio synchronization signal in the composite audio signal, and correcting the time stamp offset according to the calculated time stamp difference.”).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over CN 111631703 A (“CN ‘703”) in view of US 2018/0239430 A1 to Tadi et al. (“Tadi”) and CN 110753261 A (“CN ‘261”) as applied to Claim 2 above, and further in view of A. Ameera et al., “Analysis of EEG Spectrum Bands Using Power Spectral Density for Pleasure and Displeasure State,” 2019 IOP Conf. Ser.: Mater. Sci. Eng. 557 012030 (“Ameera”).

Regarding Claim 16, the combination of CN ‘703, Tadi and CN ‘261 renders obvious the entirety of Claim 1 as explained above. The combination of CN ‘703, Tadi and CN ‘261 does not disclose: wherein the each of the first and second differential signals has a frequency ranging from 20 Hz to 40 Hz and an amplitude of 1 mV.

Ameera describes “Analysis of EEG Spectrum Bands Using Power Spectral Density for Pleasure and Displeasure State” (Title). Ameera is analogous art. Ameera teaches: wherein the each of the first and second differential signals has a frequency ranging from 20 Hz to 40 Hz (Abstract, “Brainwaves is divided into 5 sub frequency bands namely alpha (8 – 13 Hz), beta (13 – 30 Hz), gamma (30 – 100 Hz), theta (4 – 8 Hz) and delta (1 – 4 Hz)”); Ameera’s range (i.e., 8 Hz to 100 Hz – the range Ameera describes as relevant to the same brain waves which are the subject of the claimed invention) overlaps the claimed range of 20 Hz to 40 Hz. In the case where the claimed ranges “overlap or lie inside ranges disclosed by the prior art,” a prima facie case of obviousness exists. In re Wertheim, 541 F.2d 257, 191 USPQ 90 (CCPA 1976).
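The In re Wertheim comparison is mechanical and easy to verify; a minimal sketch, with interval endpoints in Hz taken from the ranges cited in the rejection (the helper names are illustrative):

```python
def overlaps(claimed, prior):
    """True when the claimed (lo, hi) range intersects the prior-art range."""
    return claimed[0] <= prior[1] and prior[0] <= claimed[1]

def lies_inside(claimed, prior):
    """True when the claimed range lies entirely inside the prior-art range."""
    return prior[0] <= claimed[0] and claimed[1] <= prior[1]

claimed_range = (20, 40)  # Claim 16: 20 Hz to 40 Hz
ameera_range = (8, 100)   # Examiner's characterization of Ameera's bands

print(overlaps(claimed_range, ameera_range))     # True
print(lies_inside(claimed_range, ameera_range))  # True: inside, not merely overlapping
```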
Although Ameera does not disclose the precise range of 20 Hz to 40 Hz claimed, it would have been obvious for a person of ordinary skill in the art to select any range within Ameera’s broader range of 8 Hz to 100 Hz, as doing so would be likely to result in success and would require only routine optimization.

and an amplitude of 1 mV (Pg. 3, Second Paragraph, “Each raw signals have amplitudes of microvolts.”). Ameera’s disclosed amplitude range of “microvolts” does not overlap with the claimed amplitude of 1 mV, but is merely close. A prima facie case of obviousness exists where the claimed ranges or amounts do not overlap with the prior art but are merely close. Titanium Metals Corp. of America v. Banner, 778 F.2d 775, 783, 227 USPQ 773, 779 (Fed. Cir. 1985). It would have been obvious for a person of ordinary skill in the art to select the claimed amplitude of 1 mV from Ameera’s disclosed range of “microvolts” because doing so would be likely to result in success and would require only routine optimization.

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of CN ‘703, Tadi and CN ‘261 with the teachings of Ameera (i.e., to use 20 Hz to 40 Hz and 1 mV as electrical characteristics of the first and second signals) in order to facilitate data processing as those characteristics are associated with the subject waveforms (Ameera at Abstract; Ameera at Pg. 1, Third Paragraph; Ameera at Pg. 3, Second Paragraph).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J MUTCHLER whose telephone number is (571)272-8012. The examiner can normally be reached M-F 7:00 am - 4:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer McDonald can be reached at 571-270-3061. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /C.J.M./Examiner, Art Unit 3796 /Jennifer Pitrak McDonald/Supervisory Patent Examiner, Art Unit 3796 1 CN 111631703 A was disclosed in the Third-Party Submission under 37 CFR 1.290 dated 4/3/2025. It is noted that citations herein are made with reference to the translation provided alongside that Third-Party Submission. 2 CN 110753261 A was disclosed in the Third-Party Submission under 37 CFR 1.290 dated 4/3/2025. It is noted that citations herein are made with reference to the translation provided alongside that Third-Party Submission.
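For reference, the three-stream pattern recited in Claims 10-14 (a stream of single pulses, then dual pulses, then triple pulses, with the total duration equal to the sum of the three stream durations) is easy to prototype. The sketch below is illustrative only; the pulse counts, widths, and intervals are invented placeholders, since the claims recite structure rather than numbers:

```python
def build_pattern(counts=(4, 3, 2), intervals=(0.10, 0.15, 0.20),
                  pulse_width=0.01, gap=0.002):
    """Return ((start, end) pulse times, total duration) for a three-stream
    pattern: counts[0] single pulses, counts[1] dual pulses, counts[2] triple
    pulses, each stream with its own inter-group interval (Claims 10-13).
    All timing values are illustrative assumptions, not from the application.
    """
    pulses, t = [], 0.0
    for stream_index, (n_groups, interval) in enumerate(zip(counts, intervals)):
        group_size = stream_index + 1                # single, dual, triple pulses
        for _ in range(n_groups):
            for _ in range(group_size):
                pulses.append((t, t + pulse_width))  # each pulse has a start,
                t += pulse_width + gap               # end and duration (Claim 3)
            t += interval                            # spacing between pulse groups
    return pulses, t

pattern, total_duration = build_pattern()
print(len(pattern))  # 16 pulses: 4*1 + 3*2 + 2*3
```

Per Claim 9, a device would repeat such a pattern continuously over the monitoring period; the returned total duration corresponds to the Claim 14 sum of the three stream durations.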

Prosecution Timeline

Mar 27, 2024
Application Filed
Jan 07, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599330
WEARABLE MEDICAL DEVICE WITH ZONELESS ARRHYTHMIA DETECTION
2y 5m to grant • Granted Apr 14, 2026
Patent 12582332
PIEZOELECTRIC SENSOR WITH RESONATING MICROSTRUCTURES
2y 5m to grant • Granted Mar 24, 2026
Patent 12576276
Amplitude Modulating Waveform Pattern Generation for Stimulation in an Implantable Pulse Generator
2y 5m to grant • Granted Mar 17, 2026
Patent 12569671
DEVICE AND METHOD FOR DETERMINATION OF A CARDIAC OUTPUT FOR A CARDIAC ASSISTANCE SYSTEM
2y 5m to grant • Granted Mar 10, 2026
Patent 12502520
Constraint Delivery – Hinge & Constraint Delivery - Corset
2y 5m to grant • Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 47% (65% with interview, +18.6%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 47 resolved cases by this examiner. Grant probability derived from career allow rate.
