Prosecution Insights
Last updated: April 19, 2026
Application No. 19/237,931

TIME SYNCHRONIZATION FOR SHARED EXTENDED REALITY EXPERIENCES

Status: Non-Final OA (§103)
Filed: Jun 13, 2025
Examiner: BOGALE, AMEN W
Art Unit: 2628
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 5m
Grant Probability With Interview: 78%

Examiner Intelligence

Career Allow Rate: 74% (above average; 338 granted / 455 resolved; +12.3% vs TC avg)
Interview Lift: +4.0% (minimal) for resolved cases with an interview
Typical Timeline: 2y 5m average prosecution; 29 applications currently pending
Career History: 484 total applications across all art units

Statute-Specific Performance

§101: 1.2% (-38.8% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 34.1% (-5.9% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 455 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claim(s) 1-4, 10-14, and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Carrigan et al (US 2020/0382872) in view of Pinto et al (US 12,353,790).

As to claim 1, Carrigan teaches a method comprising: capturing, by a first environment ([0223] smartphone 604 can use data (e.g., from the audio tone, received from the first device (e.g., home media hub 602)), fig. 6); using the audio signal to determine a time offset between a first clock of the first ([0223] determine a time delay between a timestamp of when home media hub 602 outputted a signal representing the audio tone, and the timestamp of when smartphone 604 detected the tone); synchronizing, based on the time offset, the first clock and the second clock ([0223] adjusting the audio timing synchronization setting includes adding a time delay to audio output from audio output devices 608A and 608B, similar to as described above with respect to FIG. 7E, [0212]); and aligning virtual content that is simultaneously presented by the first ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously).

Carrigan does not teach a first XR device and a second XR device as claimed. However, Pinto teaches a first XR device and a second XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach an audio synchronization between XR devices, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).

As to claim 2, Carrigan teaches the method, further comprising: establishing a shared coordinate system between the first ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously). Carrigan does not teach the first XR device and the second XR device as claimed.
However, Pinto teaches the first XR device and the second XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the first XR device and the second XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).

As to claim 3, Carrigan teaches the method, further comprising: causing presentation of the virtual content by the first ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously). Carrigan does not teach the first XR device and the second XR device as claimed. However, Pinto teaches the first XR device and the second XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the first XR device and the second XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).
As to claim 4, Carrigan teaches the method, wherein the audio signal comprises a first time-indexed audio signal based on the first clock ([0223] timestamp of when smartphone 604 detected the tone), and the using of the audio signal to determine the time offset comprises: receiving, from the second by the second ([0223] a timestamp of when home media hub 602 outputted a signal representing the audio tone); and comparing the first time-indexed audio signal and the second time-indexed audio signal to determine the time offset ([0223] determine a time delay between a timestamp of when home media hub 602 outputted a signal representing the audio tone, and the timestamp of when smartphone 604 detected the tone). Carrigan does not teach the first XR device and the second XR device as claimed. However, Pinto teaches the first XR device and the second XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the first XR device and the second XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).

As to claim 10, Carrigan teaches at least one processor ([0175] computer processor); and at least one memory component storing instructions that, when executed by the at least one processor, configure the ([0175] Memory of…electronic device 500 can include one or more non-transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516… can cause the computer processors to perform the techniques…including process 900 (FIG. 9)) comprising: capturing an audio signal representing sound originating from a location associated with another ([0223] smartphone 604 can use data (e.g., from the audio tone, received from the first device (e.g., home media hub 602)), fig. 6); using the audio signal to determine a time offset between a first clock of the ([0223] determine a time delay between a timestamp of when home media hub 602 outputted a signal representing the audio tone, and the timestamp of when smartphone 604 detected the tone); synchronizing, based on the time offset, the first clock and the second clock ([0223] adjusting the audio timing synchronization setting includes adding a time delay to audio output from audio output devices 608A and 608B, similar to as described above with respect to FIG. 7E, [0212]); and aligning virtual content that is simultaneously presented by the ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously). Carrigan does not teach the XR device and the other XR device as claimed. However, Pinto teaches the XR device and the other XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach an audio synchronization between XR devices, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).
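The timestamp-based delay measurement that the rejection repeatedly cites from Carrigan's paragraph [0223] (an emitter-side timestamp for when the tone was output, compared against a receiver-side timestamp for when the tone was detected) can be sketched as follows. This is a minimal illustration, not code from the application or the cited art; the function names, the optional distance parameter, and the speed-of-sound constant are assumptions added for clarity.

```python
# Illustrative sketch of clock-offset estimation from an emit timestamp
# (emitter's clock) and a detect timestamp (receiver's clock).

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound at room temperature

def clock_offset(emit_ts, detect_ts, distance_m=0.0):
    """Offset to add to receiver-local time to express it in emitter time.

    emit_ts:    when the emitter says it output the tone (emitter clock)
    detect_ts:  when the receiver says it detected the tone (receiver clock)
    distance_m: optional emitter-receiver distance, used to subtract the
                acoustic propagation delay from the raw measurement
    """
    propagation_s = distance_m / SPEED_OF_SOUND_M_S
    return emit_ts - (detect_ts - propagation_s)

def to_emitter_time(local_ts, offset):
    # Translate a receiver-local timestamp into the emitter's timebase,
    # so both devices can schedule content against one shared clock.
    return local_ts + offset
```

For example, a tone emitted at t=10.000 s (emitter clock) and detected at t=10.120 s (receiver clock) from 6.86 m away implies 0.02 s of acoustic travel, so the receiver's clock runs 0.1 s ahead of the emitter's.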
As to claim 11, Carrigan teaches the ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously). Carrigan does not teach the XR device and the other XR device as claimed. However, Pinto teaches the XR device and the other XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the XR device and the other XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).

As to claim 12, Carrigan teaches the ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously). Carrigan does not teach the XR device and the other XR device as claimed. However, Pinto teaches the XR device and the other XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the XR device and the other XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).
As to claim 13, Carrigan teaches the ([0223] timestamp of when smartphone 604 detected the tone), and the using of the audio signal to determine the time offset comprises: receiving, from the other ([0223] a timestamp of when home media hub 602 outputted a signal representing the audio tone); and comparing the first time-indexed audio signal and the second time-indexed audio signal to determine the time offset ([0223] determine a time delay between a timestamp of when home media hub 602 outputted a signal representing the audio tone, and the timestamp of when smartphone 604 detected the tone). Carrigan does not teach the XR device and the other XR device as claimed. However, Pinto teaches the XR device and the other XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the XR device and the other XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).

As to claim 16, Carrigan teaches at least one non-transitory computer-readable storage medium including instructions that, when executed by at least one processor, cause the at least one processor to perform operations ([0175] Memory of…electronic device 500 can include one or more non-transitory computer-readable storage mediums, for storing computer-executable instructions, which, when executed by one or more computer processors 516… can cause the computer processors to perform the techniques…including process 900 (FIG. 9)) comprising: capturing, by a first ([0223] smartphone 604 can use data (e.g., from the audio tone, received from the first device (e.g., home media hub 602)), fig. 6); using the audio signal to determine a time offset between a first clock of the first ([0223] determine a time delay between a timestamp of when home media hub 602 outputted a signal representing the audio tone, and the timestamp of when smartphone 604 detected the tone); synchronizing, based on the time offset, the first clock and the second clock ([0223] adjusting the audio timing synchronization setting includes adding a time delay to audio output from audio output devices 608A and 608B, similar to as described above with respect to FIG. 7E, [0212]); and aligning virtual content that is simultaneously presented by the first and the second clock ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously). Carrigan does not teach a first XR device and a second XR device as claimed. However, Pinto teaches a first XR device and a second XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach an audio synchronization between XR devices, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).
As to claim 17, Carrigan teaches the at least one non-transitory computer-readable medium, the operations further comprising: establishing a shared coordinate system between the first ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously). Carrigan does not teach the first XR device and the second XR device as claimed. However, Pinto teaches the first XR device and the second XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the first XR device and the second XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).

As to claim 18, Carrigan teaches the at least one non-transitory computer-readable medium, the operations further comprising: causing presentation of the virtual content by the first ([0240] audio output from the third device and another device (e.g., a TV) appear to be in synchronization to a listener when the devices are outputting the same content simultaneously). Carrigan does not teach the first XR device and the second XR device as claimed. However, Pinto teaches the first XR device and the second XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the first XR device and the second XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).

As to claim 19, Carrigan teaches the at least one non-transitory computer-readable medium, wherein the audio signal comprises a first time-indexed audio signal based on the first clock ([0223] timestamp of when smartphone 604 detected the tone), and the using of the audio signal to determine the time offset comprises: receiving, from the second ([0223] a timestamp of when home media hub 602 outputted a signal representing the audio tone); and comparing the first time-indexed audio signal and the second time-indexed audio signal to determine the time offset ([0223] determine a time delay between a timestamp of when home media hub 602 outputted a signal representing the audio tone, and the timestamp of when smartphone 604 detected the tone). Carrigan does not teach the first XR device and the second XR device as claimed. However, Pinto teaches the first XR device and the second XR device (Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan to teach the first XR device and the second XR device, as suggested by Pinto. The motivation would have been in order to provide the “best user experience” (col. 5).

2. Claim(s) 5, 14, and 20 is/are rejected under 35 U.S.C.
103 as being unpatentable over Carrigan et al (US 2020/0382872) in view of Pinto et al (US 12,353,790) and further in view of Gossard et al (US 2020/0302948).

As to claim 5, Carrigan in view of Pinto do not teach the method including a cross-correlation coefficient as claimed. However, Gossard teaches the method, wherein the comparing of the first time-indexed audio signal and the second time-indexed audio signal comprises: determining a cross-correlation coefficient ([0114] cross-correlation filter); and identifying the time offset based on the cross-correlation coefficient ([0114] the cross-correlation filter calculates the time delay between two different audio signals). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan in view of Pinto to teach calculating a time delay between audio signals based on a cross-correlation, as suggested by Gossard. The motivation would have been in order to improve “the audio quality of live performances for listeners who hear audio reproduced by loudspeakers at live performance venues” ([0015]).

As to claim 14, while Carrigan in view of Pinto teach the XR device (Pinto: Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization), Carrigan in view of Pinto do not teach the XR device including a cross-correlation coefficient as claimed. However, Gossard teaches the device, wherein the comparing of the first time-indexed audio signal and the second time-indexed audio signal comprises: determining a cross-correlation coefficient ([0114] cross-correlation filter); and identifying the time offset based on the cross-correlation coefficient ([0114] the cross-correlation filter calculates the time delay between two different audio signals). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan in view of Pinto to teach calculating a time delay between audio signals based on a cross-correlation, as suggested by Gossard. The motivation would have been in order to improve “the audio quality of live performances for listeners who hear audio reproduced by loudspeakers at live performance venues” ([0015]).

As to claim 20, Carrigan in view of Pinto do not teach the at least one non-transitory computer-readable medium including a cross-correlation coefficient as claimed. However, Gossard teaches the at least one non-transitory computer-readable medium, wherein the comparing of the first time-indexed audio signal and the second time-indexed audio signal comprises: determining a cross-correlation coefficient ([0114] cross-correlation filter); and identifying the time offset based on the cross-correlation coefficient ([0114] the cross-correlation filter calculates the time delay between two different audio signals). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan in view of Pinto to teach calculating a time delay between audio signals based on a cross-correlation, as suggested by Gossard. The motivation would have been in order to improve “the audio quality of live performances for listeners who hear audio reproduced by loudspeakers at live performance venues” ([0015]).

3. Claim(s) 6 and 15 is/are rejected under 35 U.S.C.
103 as being unpatentable over Carrigan et al (US 2020/0382872) in view of Pinto et al (US 12,353,790) and further in view of Saulters (US 2014/0328485).

As to claim 6, while Carrigan in view of Pinto teach the first XR device and the second XR device (Pinto: Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization), Carrigan in view of Pinto do not teach the method including compensating the audio latency based on the distance as claimed. However, Saulters teaches the method, further comprising: determining a distance between the first device and the second device in the environment ([0033] a calculated distance between a mobile device user-attendee and a particular loudspeaker); and adjusting the time offset to compensate for audio latency based on the distance between the first device and the second device in the environment ([0033] a time delay can be determined and compensated based on a calculated distance between a mobile device user-attendee and a particular loudspeaker, Fig. 1). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan in view of Pinto to teach the method including compensating for audio latency, as suggested by Saulters. The motivation would have been in order to provide “a mechanism to deliver high quality live sound to an audience at a concert event without removing the feel of being in a live event” ([0004]).

As to claim 15, while Carrigan in view of Pinto teach the XR device and the other XR device (Pinto: Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization), Carrigan in view of Pinto do not teach the XR devices including compensating the audio latency based on the distance as claimed. However, Saulters teaches the device, the operations further comprising: determining a distance between the device and the other device in the environment ([0033] a calculated distance between a mobile device user-attendee and a particular loudspeaker); and adjusting the time offset to compensate for audio latency based on the distance between the device and the other device in the environment ([0033] a time delay can be determined and compensated based on a calculated distance between a mobile device user-attendee and a particular loudspeaker, Fig. 1). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan in view of Pinto to teach the devices including compensating the audio latency based on the distance, as suggested by Saulters. The motivation would have been in order to provide “a mechanism to deliver high quality live sound to an audience at a concert event without removing the feel of being in a live event” ([0004]).

4. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Carrigan et al (US 2020/0382872) in view of Pinto et al (US 12,353,790) and further in view of Saulters (US 2014/0328485) and further in view of Hardie et al (US 11,875,820).

As to claim 7, while Carrigan in view of Pinto and further in view of Saulters teach the first XR device and the second XR device (Pinto: Fig. 1B, Fig. 4, col. 8: FIG.
1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization), Carrigan in view of Pinto and further in view of Saulters do not teach the method including a microphone array to determine a distance as claimed. However, Hardie teaches the method, wherein the first device comprises a microphone array, and the determining of the distance comprises using the microphone array to perform sound source localization (SSL) (col. 16: Software components of the speech interface device 108 may also include a sound source localization (SSL) component 224 that may be used to determine the distance of the user 104 from the speech interface device 108. The SSL component 224 is configured to analyze differences in arrival times of received sound at the respective microphones of the microphone array 200). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan in view of Pinto and Saulters to teach the method including a microphone array to determine a distance, as suggested by Hardie. The motivation would have been in order to provide “a mechanism to deliver high quality live sound to an audience at a concert event without removing the feel of being in a live event” ([0004]).

5. Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Carrigan et al (US 2020/0382872) in view of Pinto et al (US 12,353,790) and further in view of Ohta (US 2009/0232318).

As to claim 8, while Carrigan in view of Pinto teach the first XR device and the second XR device (Pinto: Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization), Carrigan in view of Pinto do not teach the method including a predetermined sound. However, Ohta teaches the method, wherein the sound comprises a predetermined sound generated by the second device (a first speaker for outputting a first audio signal including a first test signal, [0044]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan in view of Pinto to teach the method including a microphone array to determine a distance, as suggested by Ohta. The motivation would have been in order to improve the user's interaction with the electronic devices.

6. Claim(s) 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Carrigan et al (US 2020/0382872) in view of Pinto et al (US 12,353,790) and further in view of Hsueh (US 2021/0076096).

As to claim 9, while Carrigan in view of Pinto teach the first XR device and the second XR device (Pinto: Fig. 1B, Fig. 4, col. 8: FIG. 1B illustrates that the first avatar 12 is playing the virtual drum 14 (e.g., based on user-input of the first user 8), and as a result both the first and second user's headsets are playing back the audio content (e.g., sounds of drums being played) in synchronization), Carrigan in view of Pinto do not teach the method including a prompt as claimed. However, Hsueh teaches the method, wherein the sound comprises a predetermined sound generated by a user of the second device based on a prompt provided by the first device or the second device (prompting the user to provide a test audio, claim 2, [0066]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Carrigan in view of Pinto to teach the method including a prompt, as suggested by Hsueh. The motivation would have been in order to improve the quality of the user's interaction with the electronic devices.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMEN W BOGALE whose telephone number is (571) 270-1579. The examiner can normally be reached M-F, 10 AM-6 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nitin Patel, can be reached at (571) 272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMEN W BOGALE/
Examiner, Art Unit 2628

/NITIN PATEL/
Supervisory Patent Examiner, Art Unit 2628
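For reference, the cross-correlation comparison relied on in the Gossard-based rejections of claims 5, 14, and 20 (sliding one time-indexed capture against the other and taking the lag with the highest correlation) can be sketched roughly as below. This is a generic illustration, not code from Gossard or the application; the function names and the brute-force search are assumptions, and an unnormalized dot-product score stands in for a true correlation coefficient.

```python
# Illustrative brute-force cross-correlation over two captured signals,
# represented as plain lists of samples at the same sample rate.

def best_lag(sig_a, sig_b):
    """Lag (in samples) by which sig_a trails sig_b."""
    n_a, n_b = len(sig_a), len(sig_b)
    best, best_score = 0, float("-inf")
    for lag in range(-n_b + 1, n_a):
        # Correlation score at this lag, over the overlapping sample range.
        score = sum(
            sig_a[i] * sig_b[i - lag]
            for i in range(max(0, lag), min(n_a, n_b + lag))
        )
        if score > best_score:
            best, best_score = lag, score
    return best

def lag_to_seconds(lag, sample_rate_hz):
    # Convert the best-matching lag into a time offset.
    return lag / sample_rate_hz
```

For example, a pulse that appears three samples later in one capture than in the other yields a lag of 3, which at 48 kHz corresponds to a 62.5 µs offset. Production code would use an FFT-based correlation rather than this O(n²) loop.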

Prosecution Timeline

Jun 13, 2025
Application Filed
Mar 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592197: DISPLAY SUBSTRATE AND DISPLAY APPARATUS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586535: Display having Semiconducting Oxide Gate Driver Circuitry With Bottom Gate Terminals for Reduced Leakage (granted Mar 24, 2026; 2y 5m to grant)
Patent 12588350: DISPLAY DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586537: DISPLAY APPARATUS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12562127: SCREEN DRIVE CIRCUIT, DISPLAY, AND ELECTRONIC DEVICE (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74% (78% with interview, +4.0% lift)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 455 resolved cases by this examiner. Grant probability derived from career allow rate.
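As a sanity check, the headline probabilities on this page follow directly from the raw counts shown in the examiner statistics. A minimal sketch of the arithmetic (the rounding and the additive interview-lift model are assumptions about how the tool derives its figures):

```python
# Reproducing the page's headline numbers from the raw counts it reports.
granted, resolved = 338, 455

allow_rate = granted / resolved               # about 0.743, shown as 74%
interview_lift = 0.04                          # +4.0 percentage points
with_interview = allow_rate + interview_lift   # about 0.783, shown as 78%

print(f"{allow_rate:.0%}, {with_interview:.0%}")  # prints: 74%, 78%
```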
