Prosecution Insights
Last updated: April 19, 2026
Application No. 19/109,523

AUGMENTED REALITY SYSTEMS, DEVICES AND METHODS

Non-Final OA (§102, §103)
Filed: Mar 06, 2025
Examiner: XIE, KWIN
Art Unit: 2626
Tech Center: 2600 — Communications
Assignee: Artificial Intelligence Centre Of Excellence Pty Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 277 granted / 435 resolved; +1.7% vs TC avg)
Interview Lift: +32.1% (strong), comparing allow rates with vs. without an interview among resolved cases
Typical Timeline: 2y 7m average prosecution; 16 applications currently pending
Career History: 451 total applications across all art units
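As a rough check, these headline numbers are mutually consistent if "interview lift" is read as a percentage-point increase over the career allow rate — an assumption, since the dashboard does not define the metric. A minimal Python sketch:

```python
# Sanity check on the dashboard's figures. Assumption: interview lift is
# the percentage-point difference between the with-interview allow rate
# and the examiner's baseline (career) allow rate.
granted, resolved = 277, 435

baseline = granted / resolved          # 0.637 -> displayed as 64%
with_interview = baseline + 0.321      # +32.1 pp -> 0.958, displayed as 96%

print(f"Career allow rate: {baseline:.1%}")       # 63.7%
print(f"With interview:    {with_interview:.1%}")  # 95.8%
```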

Statute-Specific Performance

§101: 1.5% (-38.5% vs TC avg)
§102: 44.0% (+4.0% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Comparisons are against the Tech Center average estimate (the chart's black line); based on career data from 435 resolved cases.
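Working backwards from the deltas (treating them as percentage-point differences, which is an assumption) recovers the chart's baseline; notably, all four statutes imply the same 40.0% Tech Center average estimate, suggesting a single baseline line on the chart:

```python
# Recover the implied Tech Center average from each examiner rate and its
# "vs TC avg" delta (assumed to be percentage-point differences).
rates = {"§101": (1.5, -38.5), "§102": (44.0, +4.0),
         "§103": (50.0, +10.0), "§112": (3.3, -36.7)}

for statute, (rate, delta) in rates.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")  # 40.0% for each
```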

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in Australian parent Application Nos. AU2022902305, AU2022902303, and AU2022902304, all filed on August 14, 2022.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on March 6, 2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

As a result of the Preliminary Amendment filed on March 6, 2025, claims 1-13, 16-19, 21, 31 and 33 are pending. Claims 4, 5, 7-11, 21, 31 and 33 are amended. Claims 14-15, 20, 22-30, 32 and 34-41 are canceled.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-5, 7-10, 13, 16-19 and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ha et al., United States Patent Application Publication No. US 2022/0043354 A1.

Regarding claim 1, Ha discloses a method of using augmented reality (AR) to connect a remote expert with a front line worker (See Ha, generally, Figs. 1-5, Abstract, Summary), the method comprising:

receiving, by a processor, a first video stream from a 2D camera at a first location where the remote expert is located (Fig. 4, S10; Detailed Description, [0072-0080], “Referring to FIG. 4, the three-dimensional (3D) camera of the tabletop 200 is configured to detect 3D motion in the upper space of the touchscreen…By default, the 3D depth information is used to identify each working tool, each office tool (pen or eraser), and the hand of a remote expert and to store the motion data continuously… First, a process of obtaining images of the upper space of the touchscreen among input images from the 3D camera is performed S10.”);

processing the first video stream in real time, by a processor, to determine 3D coordinates of a hand of the remote expert (Fig. 5, S20; Detailed Description, [0075-0080], “In other words, a non-rigid object such as the hand is first identified, and then a procedure for re-confirming whether the identified object is a preconfigured object by using 3D depth information may be performed. Also, for a rigid object such as the working tool or office tool, a procedure for detecting whether the identified object is a preconfigured object by using 3D depth information may be performed…. Next, a process of configuring a further specific region of interest (ROI) of the touchscreen for learning and recognition purposes is performed S20.”);

mapping, by a processor, an AR model to the 3D coordinates of the hand of the remote expert (Fig. 4, S30-S50; See also Detailed Description, [0052], “The server 300 relays data between the augmented reality glasses 100 and the tabletop 200; matches coordinates of an augmented guide corresponding to the hand motion information and instructions so that the augmented guide is displayed on the corresponding position of a physical object in the actual image information; and transmits the matched coordinates to the augmented reality glasses 100 in real-time.”; Detailed Description, [0083]); and

rendering the AR model in real time as a first AR image in an AR headset at a second location at which the front line worker is located, the second location being remote from the first location (Fig. 4, S50; See also Detailed Description, [0065], “At this time, when the remote expert draws instructions around a selected physical object by using a pen, the instructions are also displayed on the transparent display of the field operator by being drawn around the physical object as an augmented guide.”; See also Detailed Description, [0083], “Finally, a process of transforming spatial coordinates of the upper part of the touchscreen into the coordinate system of the augmented reality glasses 100 and displaying an augmented guide on a transparent display of the augmented reality glasses 100 is performed S50.”).

Regarding claim 2, Ha further discloses wherein the mapping of the AR model to the 3D coordinates of the hand of the remote expert includes converting the 3D coordinates to the frame of reference of the front line worker (Fig. 4, S30-S50; See also Detailed Description, [0052], “The server 300 relays data between the augmented reality glasses 100 and the tabletop 200; matches coordinates of an augmented guide corresponding to the hand motion information and instructions so that the augmented guide is displayed on the corresponding position of a physical object in the actual image information; and transmits the matched coordinates to the augmented reality glasses 100 in real-time.”; Detailed Description, [0083]).

Regarding claim 3, Ha discloses a method further comprising capturing a second video stream of a field of view of the front line worker with a camera of the AR headset of the front line worker, inserting a second AR image corresponding to the first AR image into the second video stream and displaying the second video stream and second AR image on a display of a computing device used by the remote expert (Detailed Description, [0049], “Referring again to FIG. 2, the augmented reality glasses 100 are structured to be worn by a field operator, to be equipped with a video camera for capturing on-site, actual image information, and to display an augmented guide on a transparent display.”; See also Detailed Description, [0064-0070], “The working area is an area where actual image information transmitted by a field operator is displayed and at the same time, an effective area from which a three-dimensional camera detects objects. When a remote expert draws instructions on the working area by using a pen or an eraser, the instructions are displayed on a transparent display of the field operator as an augmented guide…. At this time, when the remote expert draws instructions around a selected physical object by using a pen, the instructions are also displayed on the transparent display of the field operator by being drawn around the physical object as an augmented guide.”).
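For readers less familiar with the claim language, the four steps recited in claim 1 amount to a capture-estimate-transform-render loop. The sketch below is purely illustrative: every name in it is hypothetical, and neither the application nor Ha is limited to this structure.

```python
# Hypothetical sketch of the four steps recited in claim 1. Function and
# class names are illustrative only; they are not taken from the record.
from dataclasses import dataclass

import numpy as np


@dataclass
class HandPose:
    joints_xyz: np.ndarray  # (21, 3) estimated 3D coordinates of hand joints


def estimate_hand_3d(frame: np.ndarray) -> HandPose:
    """Step 2: process a 2D frame to recover 3D hand coordinates.
    A real system would run a learned hand-pose model here."""
    return HandPose(joints_xyz=np.zeros((21, 3)))


def map_ar_model(pose: HandPose, expert_to_worker: np.ndarray) -> np.ndarray:
    """Step 3: map the AR hand model into the worker's frame of reference
    by applying a 4x4 rigid transform to each joint."""
    homog = np.c_[pose.joints_xyz, np.ones(len(pose.joints_xyz))]
    return (expert_to_worker @ homog.T).T[:, :3]


def run_session(frames, expert_to_worker, headset):
    for frame in frames:          # Step 1: frames from the expert's 2D camera
        pose = estimate_hand_3d(frame)
        joints = map_ar_model(pose, expert_to_worker)
        headset.render(joints)    # Step 4: render in the worker's AR headset
```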
Regarding claim 4, Ha discloses wherein inserting the second AR image in the second video stream comprises mapping the 3D coordinates of the hand of the expert to the field of view of the front line worker (See Detailed Description, [0064-0070], [0083]).

Regarding claim 5, Ha discloses wherein the first AR image is a 3D representation of one of the remote expert’s hands (See Figs. 6-10; Detailed Description, [0065-0070], “Also, if either of the hand of the remote expert and the tool is recognized, the tabletop 200 is automatically switched to an action guide mode and transmits hand motion video or motion video of the tool.”; See also Detailed Description, [0085-0100]).

Regarding claim 7, Ha discloses wherein determining the 3D coordinates of the remote expert's hand includes determining coordinates of hand joints of the remote expert's hands and wherein the AR headset renders a 3D image of the remote expert's hand and hand joint movements in real time (Detailed Description, [0053-0056], “The server 300 relays data between the augmented reality glasses 100 and the tabletop 200; matches coordinates of an augmented guide corresponding to the hand motion information and instructions so that the augmented guide is displayed on the corresponding position of a physical object in the actual image information; and transmits the matched coordinates to the augmented reality glasses 100 in real-time.”).

Regarding claim 8, Ha discloses claim 1 wherein the 3D coordinates comprise a point cloud generated from pixels of a 2D image of the hand captured by the 2D camera (Figs. 8-10, Detailed Description, [0094-0116], “Referring to FIG. 8, hand gesture may be recognized based on the touchscreen coordinates and 3D depth data. The trajectory of 2D coordinates of the fingertip moving on the touchscreen are stored in time order, and the 2D time series coordinates are transmitted based on WebRTC.”).

Regarding claim 9, Ha discloses wherein a computing device of the remote expert at the first location comprises said 2D camera, a processor and a display for displaying the second video stream of the field of view of the front line worker and the second AR image to the remote expert (Detailed Description, [0094-0100]).

Regarding claim 10, Ha discloses wherein a computing device of the remote expert sends the 3D coordinates of the remote expert's hand to the AR headset of the front line worker and wherein the AR headset maps the AR model to the 3D coordinates of the hand of the remote expert (Fig. 4, S30-S50; See also Detailed Description, [0052-0056], “The server 300 relays data between the augmented reality glasses 100 and the tabletop 200; matches coordinates of an augmented guide corresponding to the hand motion information and instructions so that the augmented guide is displayed on the corresponding position of a physical object in the actual image information; and transmits the matched coordinates to the augmented reality glasses 100 in real-time.”; Detailed Description, [0073-0083], “Also, the 3D camera of the tabletop 200 may be configured to identify the hand of the remote expert by using color information and identify each working tool and each office tool (pen or eraser) by using the 3D depth information, after which the motion data are stored continuously.”).
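Claim 8's limitation of a point cloud "generated from pixels of a 2D image" has a standard geometric reading: back-projecting pixels through the camera intrinsics once a per-pixel depth estimate exists. The following is a generic pinhole-camera sketch under that assumption, not a reconstruction of the applicant's or Ha's method:

```python
import numpy as np


def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (N, 3) point cloud using
    pinhole intrinsics: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no valid depth reading
```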
Regarding claim 13, Ha discloses the method further comprising the AR headset sending the second video stream of the field of view of the front line worker to the computing device of the remote expert and integrating the AR image into the second video stream (Figs. 12-13, Detailed Description, [0116-0128], “The left-side front camera 121 is installed at the left-side of the glasses and obtains actual image information at the front. Also, the right-side front camera 122 is installed at the right-side of the glasses and obtains actual image information at the front.... The left-side 3D sensor 131 and the right-side 3D sensor 132 operates so as to capture 3D images of the front in conjunction with the left-side camera 121 and the right-side camera 122. In other words, the captured 3D images may be stored in an internal memory or transmitted to the server 300. It should be noted that depending on embodiments, only one front camera and only one 3D sensor may be disposed to obtain actual image information. It is preferable that the front camera is configured to capture images from both of the infrared and visible regions.”; See also Fig. 3, captured image from augmented reality glasses).

Regarding claim 16, Ha discloses a remote expert computing device (Figs. 1-2, tabletop, #200) comprising a display (Figs. 1-2, touchscreen), a 2D camera (Figs. 8-10, Detailed Description, [0094-0116], “Referring to FIG. 8, hand gesture may be recognized based on the touchscreen coordinates and 3D depth data. The trajectory of 2D coordinates of the fingertip moving on the touchscreen are stored in time order, and the 2D time series coordinates are transmitted based on WebRTC.”), a processor (Figs. 1-2, tabletop, #200; it is inherent that the tabletop and touchscreen would have a processor) and a computer readable storage medium (See Related Art, [0005]; Detailed Description, [0073], [0089-0090]) storing instructions executable by the processor to:

receive, by the processor from the 2D camera, a first video stream including a hand of the remote expert (Figs. 8-10, Detailed Description, [0094-0116], “Referring to FIG. 8, hand gesture may be recognized based on the touchscreen coordinates and 3D depth data. The trajectory of 2D coordinates of the fingertip moving on the touchscreen are stored in time order, and the 2D time series coordinates are transmitted based on WebRTC.”);

process the first video stream in real time to determine 3D coordinates of the hand of the remote expert (Fig. 4, Figs. 8-10, Detailed Description, [0094-0116], “…, hand gesture may be recognized based on the touchscreen coordinates and 3D depth data.”); and

send the determined 3D coordinates of the hand of the remote expert to an AR headset of a front line worker (Fig. 4, Figs. 8-10, Detailed Description, [0094-0116], “In other words, a video in which a remote expert selects a physical object (component) from the tabletop 200, augments the selected object as a 3D model, and manipulates manually the process of assembling and disassembling the corresponding component may be transmitted in real-time to the augmented reality glasses 100 of a field operator to be used as an intuitive augmented guide.”).
Regarding claim 17, Ha discloses wherein the instructions further comprise instructions to receive a second video stream of the field of view of the front line worker from a camera of the AR headset together with an AR image whose movements are based on movements of the remote expert's hand, and to display the received second video stream and AR image on the display of the remote expert computing device (Figs. 12-13, Detailed Description, [0116-0128], “The left-side front camera 121 is installed at the left-side of the glasses and obtains actual image information at the front. Also, the right-side front camera 122 is installed at the right-side of the glasses and obtains actual image information at the front.... The left-side 3D sensor 131 and the right-side 3D sensor 132 operates so as to capture 3D images of the front in conjunction with the left-side camera 121 and the right-side camera 122. In other words, the captured 3D images may be stored in an internal memory or transmitted to the server 300. It should be noted that depending on embodiments, only one front camera and only one 3D sensor may be disposed to obtain actual image information. It is preferable that the front camera is configured to capture images from both of the infrared and visible regions.”; See also Figs. 3-9, captured image from augmented reality glasses).

Regarding claim 18, Ha discloses an AR headset comprising a screen through which a front line worker can view the real world and AR images overlaid onto the view of the real world by the AR headset (Figs. 11-12, transparent display, #110); a camera for capturing a field of view of the front line worker (Fig. 12, left-side front camera, right-side front camera, #121/122); a processor (Fig. 12, controller, #150); and a computer readable storage medium storing instructions (Detailed Description, [0121]) executable by the processor to:

receive, in real time, 3D coordinates of a remote expert's hand (Fig. 4, Figs. 8-10, Detailed Description, [0094-0116], “…, hand gesture may be recognized based on the touchscreen coordinates and 3D depth data.”);

map, in real time, an AR model to the 3D coordinates of the remote expert's hand (Fig. 4, S30-S50; See also Detailed Description, [0052], “The server 300 relays data between the augmented reality glasses 100 and the tabletop 200; matches coordinates of an augmented guide corresponding to the hand motion information and instructions so that the augmented guide is displayed on the corresponding position of a physical object in the actual image information; and transmits the matched coordinates to the augmented reality glasses 100 in real-time.”; See also Figs. 8-10, Detailed Description, [0083-0116]); and

render the AR model in real time as a first AR image viewable by the wearer of the AR headset (Fig. 4, Figs. 8-10, Detailed Description, [0094-0116], “In other words, a video in which a remote expert selects a physical object (component) from the tabletop 200, augments the selected object as a 3D model, and manipulates manually the process of assembling and disassembling the corresponding component may be transmitted in real-time to the augmented reality glasses 100 of a field operator to be used as an intuitive augmented guide.”).

Regarding claim 19, this is met by the rejection to claim 2.
Regarding claim 21, Ha discloses the AR headset further comprising: a light wave emission device for projecting light onto an area in front of the AR headset to illuminate a target subject (Detailed Description, [0140-0144], “Also, to improve a command recognition rate of the recognition camera 145, artificial eyebrows for instructions may be attached to the eyebrows of the user. The artificial eyebrows for instructions may be coated with reflective paint that reflects infrared light in a predetermined range, and the recognition camera 145 may be configured to recognize the infrared light, thereby improving the command recognition rate.”); a communications module for receiving lighting control instructions from a remote user (Fig. 12, communications module, #142); and a processor for controlling the light wave emission device in accordance with the lighting control instructions from the remote user (Detailed Description, [0140-0144], “It should be noted that since the recognition camera 145 is capable of detecting movement of the eyes of the user, gazing direction of the eyes, and size change of the eyes, an operation command may be instructed based on the size change of the eyes.”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 6, 11, 12, 31 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Ha in view of Lang, United States Patent Application Publication No. US 2022/0079675 A1.

Regarding claim 6, Ha discloses every element of claim 5 but does not explicitly disclose wherein the first AR image is a hologram. Lang, in a similar field of endeavor, discloses a method of using augmented reality (AR) to connect a remote expert with a front line worker (See Lang, generally, Summary, Abstract) wherein the first AR image is a hologram (Figs. 1-3, Detailed Description, [0192][0229], “The virtual surgical plan 45 can be registered in the common coordinate system 44. The surgical site 46 can be registered in the common coordinate system 44. Intra-operative measurements 47 can be obtained and can be used for generating a virtual surgical plan 45. An optical head mounted display 48 can project or display digital holograms of virtual data or virtual data 49 superimposed onto and aligned with the surgical site”). It would have been obvious to one of ordinary skill in the art to have modified the first AR image of Ha to include the teachings of Lang to provide the first image as a hologram.
The motivation to combine these arts is to project digital holograms superimposed onto the surgical site or the front line site, which can also add a shared holographic experience (Figs. 1-3, Detailed Description, [0192][0229]). The fact that Lang and Ha disclose similar systems of AR to provide remote expert guidance makes this combination more easily implemented.

Regarding claim 11, Ha discloses the method further comprising converting the 3D coordinates of the hand of the remote expert to the frame of reference of the AR headset (Fig. 4, S30-S50; See also Detailed Description, [0052], “The server 300 relays data between the augmented reality glasses 100 and the tabletop 200; matches coordinates of an augmented guide corresponding to the hand motion information and instructions so that the augmented guide is displayed on the corresponding position of a physical object in the actual image information; and transmits the matched coordinates to the augmented reality glasses 100 in real-time.”; Detailed Description, [0083]). Ha does not explicitly disclose determining an orientation or pose of the AR headset of the front line worker. Lang, in a similar field of endeavor, discloses a method of using augmented reality (AR) to connect a remote expert with a front line worker (See Lang, generally, Summary, Abstract) comprising determining an orientation or pose of the AR headset of the front line worker (Summary, [0009], “…some embodiments, the system comprises at least one camera integrated into or attached to the see through optical head mounted display. In some embodiments, at least one camera is separate from the optical head mounted display. In some embodiments, the one or more cameras are configured to determine the position, orientation, or position and orientation of the marker. In some embodiments, the one or more cameras are configured to determine one or more coordinates of the marker. In some embodiments, the one or more cameras are configured to track the one or more coordinates of the marker during movement of the marker”; See also Detailed Description, [0051-0054]). It would have been obvious to have modified the method of Ha to include the teachings of Lang in such a way as to add determining an orientation or pose of the AR headset of the front line worker. The motivation to combine these arts is to use the orientation of the headset to determine predetermined guidance and to improve accuracy of implants (Lang, Detailed Description, [0074-0120]). The fact that Lang and Ha disclose similar systems of AR to provide remote expert guidance makes this combination more easily implemented.

Regarding claim 12, Ha in combination with Lang discloses every element of claim 11, and Ha further discloses wherein converting the 3D coordinates comprises passing the 3D coordinates through an inversion matrix (See Fig. 4, reconstruction by applying inverse matrix). Thus, it would have remained obvious to have combined Ha and Lang in the manner of claim 11.
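The "inversion matrix" of claim 12 matches the usual rigid-body algebra: points expressed in a world (or expert) frame are moved into the headset frame by applying the inverse of the headset's pose matrix, which the orientation/pose determination of claim 11 would supply. A minimal sketch (numpy; all names are illustrative):

```python
import numpy as np


def world_to_headset(points_world, headset_pose):
    """Convert Nx3 world-frame points into the headset frame by applying
    the inverse of the headset's 4x4 pose matrix (rotation R, translation t).
    For a rigid transform the inverse is [R.T | -R.T @ t] rather than a
    general matrix inversion, which is cheaper and numerically safer."""
    R, t = headset_pose[:3, :3], headset_pose[:3, 3]
    inv = np.eye(4)
    inv[:3, :3] = R.T
    inv[:3, 3] = -R.T @ t
    homog = np.c_[points_world, np.ones(len(points_world))]
    return (inv @ homog.T).T[:, :3]
```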
Regarding claim 31, Ha discloses every element of claim 21 but does not explicitly disclose wherein the processor is configured to determine a field depth of an image captured by a camera of the AR headset by controlling a first light of the light emitting device to emit light of a first frequency to illuminate a first location on a target in front of the AR headset, control a second light of the light emitting device to emit light of a second frequency to illuminate a second location on a target in front of the AR headset, receive reflections of the first and second lights and analyse the reflections to determine a field depth of the image.

Lang, in a similar field of endeavor, discloses an AR headset wherein the processor is configured to determine a field depth in this manner (See Summary, [0005-0018], “In some embodiments, the system comprises one or more markers. In some embodiments, the marker is configured to reflect or emit light with a wavelength between 380 nm and 700 nm. In some embodiments, the marker is configured to reflect or emit light with a wavelength greater than 700 nm. In some embodiments, the marker is a radiofrequency marker, or wherein the marker is an optical marker, wherein the optical marker includes a geometric pattern…. In some embodiments, the one or more cameras detect light with a wavelength between 380 nm and 700 nm. In some embodiments, the one or more cameras detect light with a wavelength above 700 nm.”).

It would have been obvious to one of ordinary skill in the art to have modified the field-depth determination of Ha to incorporate the teachings of Lang and thereby provide the claimed processor configured to determine a field depth of the image from reflections of first and second lights of different frequencies. The motivation is to use multiple optical-based markers to increase coordinate accuracy of the AR head mounted display (See Lang, Summary, [0004-0018], “…wherein the processor is configured to determine a distance between the one or more predetermined coordinates of the virtual surgical guide and the see through optical head mounted display, wherein the one or more predetermined coordinates of the virtual surgical guide are referenced to or based on the marker, wherein the processor is configured to adjust at least one focal plane, focal point, or combination thereof of the display of the 3D stereoscopic view based on the determined distance.”). The fact that Lang and Ha disclose similar systems of AR to provide remote expert guidance makes this combination more easily implemented.
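Claim 31's two-frequency illumination with analysis of the reflections is reminiscent of dual-frequency phase-based time-of-flight, where a second modulation frequency resolves the depth ambiguity of the first. The sketch below shows that textbook technique only; it is an assumption that this is how the claimed field-depth determination works, and neither Ha nor Lang is quoted as implementing it this way.

```python
import numpy as np

C = 3e8  # speed of light, m/s


def unwrap_depth(phi1, phi2, f1, f2, max_range=10.0):
    """Recover depth from two wrapped ToF phase measurements.
    Each modulation frequency alone gives depth only up to its ambiguity
    interval c / (2 * f); searching integer wrap counts for the pair of
    candidate depths that agree resolves the ambiguity out to max_range."""
    amb1, amb2 = C / (2 * f1), C / (2 * f2)
    best, best_err = None, np.inf
    for n1 in range(int(max_range / amb1) + 1):
        d1 = (phi1 / (2 * np.pi) + n1) * amb1
        for n2 in range(int(max_range / amb2) + 1):
            d2 = (phi2 / (2 * np.pi) + n2) * amb2
            if abs(d1 - d2) < best_err:
                best, best_err = (d1 + d2) / 2, abs(d1 - d2)
    return best


# Example (hypothetical phases and 60/80 MHz modulation frequencies):
print(unwrap_depth(phi1=1.0, phi2=2.0, f1=60e6, f2=80e6))
```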
Regarding claim 33, Ha discloses every element of claim 21 but does not explicitly disclose the AR headset further comprising an optics device including a light guide and reflector for redirecting light from in front of and beneath the augmented reality headset into a camera of the augmented reality headset, wherein the optics device is attached to or integral with the AR headset. However, Ha does provide the suggestion of optical components with the AR headset (Detailed Description, [0140-0144], “Also, to improve a command recognition rate of the recognition camera 145, artificial eyebrows for instructions may be attached to the eyebrows of the user. The artificial eyebrows for instructions may be coated with reflective paint that reflects infrared light in a predetermined range, and the recognition camera 145 may be configured to recognize the infrared light, thereby improving the command recognition rate.”).

Lang, in a similar field of endeavor, discloses the AR headset further comprising an optics device including a light guide and reflector for redirecting light from in front of and beneath the augmented reality headset into a camera of the augmented reality headset (Lang, Detailed Description, [0130], “[I]n some embodiments, a pair of glasses is utilized. The glasses can include an optical head-mounted display. An optical head-mounted display (OHMD) can be a wearable display that has the capability of reflecting projected images as well as allowing the user to see through it. Various types of OHMDs known in the art can be used in order to practice embodiments of the present disclosure. These include curved mirror or curved combiner OHMDs as well as wave-guide or light-guide OHMDs. The OHMDs can optionally utilize diffraction optics, holographic optics, polarized optics, and reflective optics.”), wherein the optics device is attached to or integral with the AR headset (Summary; Detailed Description, [0130]).

It would have been obvious to one of ordinary skill in the art to have modified the AR headset of Ha to incorporate the teachings of Lang’s optics device including a light guide and reflector for redirecting light from in front of and beneath the augmented reality headset into a camera of the augmented reality headset, wherein the optics device is attached to or integral with the AR headset. The motivation to combine these arts is to utilize a known technique in the art to improve similar devices (i.e., AR devices) in the same way (See Lang, Detailed Description, [0130], showing well-known OHMDs utilizing light guides and reflectors). The fact that Lang and Ha disclose similar systems of AR to solve the same problem of providing remote expert guidance makes this combination more easily implemented.

Other References

The following references are also cited as pertinent but may not be specifically relied upon within this Action:
Neeter (US 2020/0005538 A1)
Miyasaka et al. (US 2016/0188277 A1)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KWIN XIE, whose telephone number is (571) 272-7812. The examiner can normally be reached 9:00 AM - 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Temesghen Ghebretinsae, can be reached at (571) 272-3017. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KWIN XIE/
Primary Examiner, Art Unit 2626

Prosecution Timeline

Mar 06, 2025
Application Filed
Feb 14, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602132
DISPLAY DEVICE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12578813
Touch Display Substrate, Manufacturing Method Therefor, and Touch Display Device
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12578822
TOUCH COORDINATE EDGE CORRECTION
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12566469
WEARABLE ELECTRONIC DEVICE COMPRISING SENSOR, AND METHOD BY WHICH ELECTRONIC DEVICE PROCESSES TOUCH SIGNAL
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561003
HAPTIC FEEDBACK HEADPIECE
Granted Feb 24, 2026 (2y 5m to grant)
Studying what changed in these cases can indicate how to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64%
With Interview: 96% (+32.1%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 435 resolved cases by this examiner. Grant probability derived from career allow rate.
