Prosecution Insights
Last updated: April 19, 2026
Application No. 18/490,766

METHOD AND SYSTEM FOR SHARING DIGITAL PATHOLOGICAL IMAGE

Non-Final OA (§102, §103)
Filed: Oct 20, 2023
Examiner: YI, RINNA
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: Wistron Medical Technology Corporation
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% — above average (325 granted / 444 resolved; +18.2% vs TC avg)
Interview Lift: +49.4% for resolved cases with interview
Typical Timeline: 3y 3m avg prosecution; 19 currently pending
Career History: 463 total applications across all art units

Statute-Specific Performance

§101: 7.3% (-32.7% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 444 resolved cases
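The per-statute deltas can be back-solved to the Tech Center baseline the chart implies; a minimal sketch, assuming delta = examiner rate minus TC average:

```python
# Back-solve the Tech Center baseline from the dashboard figures.
# Assumption: "vs TC avg" delta = examiner rate - Tech Center average.
examiner_rates = {"101": 7.3, "103": 57.9, "102": 21.9, "112": 8.6}
deltas = {"101": -32.7, "103": 17.9, "102": -18.1, "112": -31.4}

tc_avg = {s: round(examiner_rates[s] - deltas[s], 1) for s in examiner_rates}
for statute, avg in tc_avg.items():
    print(f"\u00a7{statute}: TC average \u2248 {avg}%")
```

Back-solving yields roughly a 40% baseline for every statute, which suggests the chart compares against a single Tech Center estimate rather than per-statute averages.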

Office Action

DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

2. Claims 1 and 11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Joo et al. (US 2014/0164948 A1). As in Claim 1, Joo teaches a method for sharing a digital pathological image (FIG. 1, pars. 32-33, a method for sharing a medical image), the method comprising: providing a remote interactive service to a first terminal device and a second terminal device through a server (FIG. 1, pars. 32, 34, 62-63, a messenger-based medical image sharing system provides a remote collaborative diagnosis to a first viewer 110 and a second viewer 120 via a medical image server 130), wherein the first terminal device comprises a first interactive interface, the second terminal device comprises a second interactive interface (see FIGS. 3-4, see the first viewer 110 and the second viewer 120), the first terminal device has an editing permission, and the second terminal device has an observing permission (pars. 46-48, 49-52, 56, a first user of the first viewer 110 has an editing permission or control authorities, and a second user of the second viewer has observing permission, which can be set by the first user; further see pars. 105 and 107-108); presenting the digital pathological image to the first interactive interface and the second interactive interface through the server (FIGS. 1 and 3-4, pars. 
32, 34, 62-63); in response to receiving an image adjustment operation performed on the digital pathological image through the first interactive interface, transmitting image adjustment information corresponding to the image adjustment operation to the second interactive interface of the second terminal device through the first interactive interface (pars. 47-51, the first user of the first viewer 110 can perform adjusting or editing operations on a sharing-target medical image – such as rotating, zooming, changing the field of view, or applying windowing – and these adjustments can be synchronized so that the second viewer 120 sees the same updated image in real time; further see pars. 36, 63-64, 79); in response to receiving a tag adding operation performed on the digital pathological image through the first interactive interface, transmitting added tag information corresponding to the tag adding operation to the second interactive interface of the second terminal device through the first interactive interface (pars. 46, 54, 57-59, 63, 66-67, the first user of the first viewer 110 can perform annotation operations on the sharing-target medical image, and these annotations can be synchronized and transmitted to the second viewer 120, which sees the same updated image; further see pars. 36, 63-64, 79); in the second terminal device, in response to receiving the image adjustment information through the second interactive interface, adjusting the digital pathological image presented on the second interactive interface based on the image adjustment information (pars. 47-49, 62, the second user of the second viewer 120 can perform adjusting or editing operations on the sharing-target medical image (e.g., rotating, zooming, changing the field of view, etc.), and these adjustments can be synchronized so that the first viewer 110 sees the same updated image; further see pars. 
36, 63-64, 79); and in the second terminal device, in response to receiving the added tag information through the second interactive interface, adding a tag corresponding to the tag adding operation performed through the first interactive interface to the digital pathological image presented on the second interactive interface based on the added tag information (pars. 47-49, 62, the second user of the second viewer 120 can perform annotation operations on the sharing-target medical image, and these annotations can be synchronized and transmitted to the first viewer 110, which sees the same updated image; further see pars. 36, 63-64, 79). As in Claim 11, Joo teaches a system for sharing a digital pathological image (FIG. 1, pars. 30-32), comprising: a server, providing a remote interactive service (FIG. 1, pars. 32, 34, 62-63, a messenger-based medical image sharing system provides a remote collaborative diagnosis to a first viewer 110 and a second viewer 120 via a medical image server 130); a first terminal device, having an editing permission and comprising a first interactive interface provided by the remote interactive service (FIGS. 1 and 3-4, pars. 46-48, 49-52, 56, a first user of the first viewer 110 has an editing permission or control authorities; further see pars. 105 and 107-108); and a second terminal device, having an observing permission and comprising a second interactive interface provided by the remote interactive service, wherein the server presents the digital pathological image to the first interactive interface and the second interactive interface (FIGS. 1 and 3-4, pars. 46-48, 49-52, 56, a second user of the second viewer has observing permission, which can be set by the first user). Claim 11 is substantially similar to Claim 1 and rejected under the same rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 3. Claims 2-3 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Joo et al. (US 2014/0164948 A1) in view of Hasegawa et al. (US 2020/0379633 A1). As in Claim 2, Joo teaches all the limitations of Claim 1. Joo does not teach performing a window dividing operation through the first interactive interface and obtaining a plurality of divided windows; performing an image switching operation on each of the divided windows through the first interactive interface and presenting the corresponding digital pathological images in each of the divided windows; based on the window dividing operation and the image switching operation, recording window dividing information through the first interactive interface, wherein the window dividing information comprises the number of the divided windows, a window identification code of each of the divided windows, and image identification codes of the digital pathological images presented in the divided windows; transmitting the window dividing information to the second interactive interface of the second terminal device through the first interactive interface; and in the second terminal device, in response to receiving the window dividing information through the second interactive interface, performing the window dividing operation through the second interactive interface based on the window dividing information to obtain 
the divided windows, and presenting the corresponding digital pathological images in each of the divided windows. However, in the same field of the invention, Hasegawa teaches performing a window dividing operation through the first interactive interface and obtaining a plurality of divided windows (pars. 48-49, 125, a window can be divided into a plurality of windows or tiles (e.g., multi-window or three or more windows/viewers)); performing an image switching operation on each of the divided windows through the first interactive interface and presenting the corresponding digital pathological images in each of the divided windows (see at least FIGS. 11-12 and 20 with the accompanying paragraphs, with the user input, the user can select or input instructions on one of the windows for various requests); based on the window dividing operation and the image switching operation, recording window dividing information through the first interactive interface, wherein the window dividing information comprises the number of the divided windows, a window identification code of each of the divided windows, and image identification codes of the digital pathological images presented in the divided windows (see at least pars. 57-59, 63-66, 75, each view window (single or multi-window) is registered or recorded in a synchronous control table via the presenter window/viewer. Each record includes information such as viewer ID, Window ID, Operation Right Flag, the number of windows, and the like); transmitting the window dividing information to the second interactive interface of the second terminal device through the first interactive interface (see at least FIGS. 9 and 20 with the accompanying paragraphs, the information can be transmitted to the audience device in the synchronous mode; see pars. 
124-127, 129-141, 154, 174-177); and in the second terminal device, in response to receiving the window dividing information through the second interactive interface, performing the window dividing operation through the second interactive interface based on the window dividing information to obtain the divided windows, and presenting the corresponding digital pathological images in each of the divided windows (see at least FIGS. 9 and 20 with the accompanying paragraphs, the audience window can present the divided windows (e.g., multi-window in this instance) to perform synchronous processing on the image on the audience window; see pars. 124-127, 129-141, 154, 174-177). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, and to incorporate the method for performing the window dividing operation in the synchronous mode between the presenter and audience windows, as taught by Hasegawa. The motivation is to ensure that all users can see and interact with the same information in real time, even when the screen is divided into multiple sub-areas, enabling accurate collaboration. As in Claim 3, Joo-Hasegawa teaches all the limitations of Claim 2. 
Joo-Hasegawa further teaches that after performing the window dividing operation through the first interactive interface and obtaining the divided windows (see the rejection of claim 2), the method further comprising: in a selected window of the divided windows on the first interactive interface, in response to receiving the image adjustment operation performed on the digital pathological image presented in the selected window through the first interactive interface, transmitting the image adjustment information corresponding to the image adjustment operation performed on the selected window to the second interactive interface of the second terminal device through the first interactive interface (Hasegawa, see the rejection of claim 2. Further see pars. 10, 51, 82, the image adjustment operations, such as moves, zoom factor changes, rotations, and the like); and in response to receiving the image adjustment information through the second interactive interface, based on the image adjustment information, adjusting the digital pathological image presented in the selected window corresponding to the second interactive interface through the second interactive interface (Hasegawa, see the rejection of claim 2. Further see pars. 10, 51, 82, the image adjustment operations, such as moves, zoom factor changes, rotations, and the like). Claim 12 is substantially similar to Claim 2 and rejected under the same rationale. Claim 13 is substantially similar to Claim 3 and rejected under the same rationale. 4. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Joo et al. (US 2014/0164948 A1) in view of Hasegawa et al. (US 2020/0379633 A1) and further in view of Deroo et al. (US 2018/0357982 A1). As in Claim 4, Joo-Hasegawa teaches all the limitations of Claim 2. 
Joo-Hasegawa does not teach: in a selected window of the divided windows on the first interactive interface, in response to the tag adding operation performed on the digital pathological image presented in the selected window through the first interactive interface, recording the added tag information through the first interactive interface, and transmitting the added tag information to the second interactive interface of the second terminal device through the first interactive interface. However, in the same field of the invention, Deroo teaches: in a selected window of the divided windows on the first interactive interface, in response to the tag adding operation performed on the digital pathological image presented in the selected window through the first interactive interface, recording the added tag information through the first interactive interface, and transmitting the added tag information to the second interactive interface of the second terminal device through the first interactive interface (pars. 30-31, 49, 131, annotations can be placed on any sub-areas and remain synchronized between a host display and an associated display; further see pars. 90-92, 97-98, 100). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, in view of Hasegawa’s teachings, and to incorporate the method for performing the annotation operation in the synchronous mode between the host and the associated devices, as taught by Deroo. The motivation is to ensure that all users can see and interact with the same information in real time, even when the screen is divided into multiple sub-areas, enabling accurate collaboration. Claim 14 is substantially similar to Claim 4 and rejected under the same rationale. 5. Claims 5 and 15 are rejected under 35 U.S.C. 
103 as being unpatentable over Joo et al. (US 2014/0164948 A1) in view of Hasegawa et al. (US 2020/0379633 A1) and further in view of Walker et al. (US 2002/0065848 A1). As in Claim 5, Joo-Hasegawa teaches all the limitations of Claim 2. Joo-Hasegawa does not teach: in the first terminal device, after enabling a lock function through the first interactive interface, in response to the image adjustment operation performed on at least one of the divided windows through the first interactive interface, synchronously adjusting the digital pathological images presented in the other divided windows of the divided windows, and recording and transmitting the image adjustment information to the second interactive interface of the second terminal device through the first interactive interface, so as to synchronously adjust the corresponding digital pathological images presented on the second interactive interface through the second interactive interface based on the image adjustment information. However, in the same field of the invention, Walker teaches: in the first terminal device, after enabling a lock function through the first interactive interface, in response to the image adjustment operation performed on at least one of the divided windows through the first interactive interface, synchronously adjusting the digital pathological images presented in the other divided windows of the divided windows, and recording and transmitting the image adjustment information to the second interactive interface of the second terminal device through the first interactive interface, so as to synchronously adjust the corresponding digital pathological images presented on the second interactive interface through the second interactive interface based on the image adjustment information (at least pars. 
4-7, 15-18, 60-62, 201-202, the system can break documents or images into independent editable sections that can be locked without affecting any other section. The user can lock a desired section of the document or image while other users (or other users’ devices) can edit different sections). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, in view of Hasegawa’s teachings, and to incorporate the method for locking independent sections of the document/image in collaborative environments, as taught by Walker. The motivation is to support multi-user, real-time collaboration across diverse documents, including images, without conflicts or unnecessary blocking, using the locking function. Claim 15 is substantially similar to Claim 5 and rejected under the same rationale. 6. Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Joo et al. (US 2014/0164948 A1) in view of Osaki et al. (US 6675352 B1) and further in view of Goede, Patricia (US 2016/0253456 A1). As in Claim 6, Joo teaches all the limitations of Claim 1. Joo teaches: after transmitting the added tag information corresponding to the tag adding operation to the second interactive interface of the second terminal device through the first interactive interface (pars. 46, 54, 57-59, 63, 66-67, the first user of the first viewer 110 can perform annotation operations on the sharing-target medical image, and these annotations can be synchronized and transmitted to the second viewer 120, which sees the same updated image; further see pars. 36, 63-64, 79). 
Joo does not appear to explicitly teach that in response to a tag deleting operation performed on the digital pathological image through the first interactive interface, transmitting deleted tag information corresponding to the tag deleting operation to the second interactive interface of the second terminal device through the first interactive interface, wherein the deleted tag information comprises at least one of a tag identification code, a tag title, a tag descriptor, a tag length, a tag author, tag coordinate information, and a tag time, wherein in the second terminal device, in response to receiving the deleted tag information through the second interactive interface, removing the corresponding tag from the digital pathological image presented on the second interactive interface through the second interactive interface based on the deleted tag information. However, in the same field of the invention, Osaki teaches that in response to a tag deleting operation performed on the digital pathological image through the first interactive interface, transmitting deleted tag information corresponding to the tag deleting operation to the second interactive interface of the second terminal device through the first interactive interface, wherein in the second terminal device, in response to receiving the deleted tag information through the second interactive interface, removing the corresponding tag from the digital pathological image presented on the second interactive interface through the second interactive interface based on the deleted tag information (col. 1, lines 9-29, col. 8, line 17 – col. 9, line 39, col. 11, lines 32-62, col. 12, lines 30-39, the system records user annotations (lines, text, pointers) directly on images or videos along with voice and execution timings, storing them as sequential data blocks. Each data block contains an initial state of the screen and commands for annotations, allowing precise playback. 
When an annotation on the first device is deleted, the system removes the corresponding intermediate data block, and the modified data blocks are then transmitted to the other device. As a result, the deleted annotation no longer appears on the other device, while all other annotations and commands remain synchronized and visually consistent, ensuring that playback across devices accurately reflects the edited session; further see col. 4, lines 27-40). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, and to delete the annotations in the synchronized environment, as taught by Osaki. The reason is to ensure that when one user deletes an annotation on their system, the change is immediately reflected on the other collaborators’ systems, keeping everyone perfectly synchronized. Joo and Osaki do not teach that the deleted tag information comprises at least one of a tag identification code, a tag title, a tag descriptor, a tag length, a tag author, tag coordinate information, and a tag time. However, in the same field of the invention, Goede teaches that the deleted tag information comprises at least one of a tag identification code, a tag title, a tag descriptor, a tag length, a tag author, tag coordinate information, and a tag time (par. 35, annotations can include metadata such as the author’s name and the date and time when the annotations were performed). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, in view of Osaki’s teachings, and to include metadata, such as the author’s name and the date and time, in the annotations, as taught by Goede. 
The motivation is to give context and traceability, so collaborators know who made an annotation and when, which improves clarity in shared work. Claim 16 is substantially similar to Claim 6 and rejected under the same rationale. 7. Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Joo et al. (US 2014/0164948 A1) in view of Backhaus, Brent (US 2011/0015941 A1). As in Claim 7, Joo teaches all the limitations of Claim 1. Joo does not teach that the digital pathological image is provided by the first terminal device or the second terminal device, and an image format of the digital pathological image is converted into a standard format through the first interactive interface or the second interactive interface adopted by the first terminal device or the second terminal device providing the digital pathological image. However, in the same field of the invention, Backhaus teaches that the digital pathological image is provided by the first terminal device or the second terminal device, and an image format of the digital pathological image is converted into a standard format through the first interactive interface or the second interactive interface adopted by the first terminal device or the second terminal device providing the digital pathological image (pars. 26-28, 40, 53, 112, the image provided by the first device (e.g., the image order (IO) management system 102) can be converted into the appropriate formats, including the DICOM format). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, and to convert the shared image into a standard format, such as the DICOM format, as taught by Backhaus. 
The motivation is to ensure that shared medical images are available in a standard format, such as DICOM, so that they can be displayed and exchanged consistently across the collaborating devices. Claim 17 is substantially similar to Claim 7 and rejected under the same rationale. 8. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Joo et al. (US 2014/0164948 A1) in view of Saikou, Masahiro (US 2023/0230364 A1). As in Claim 8, Joo teaches all the limitations of Claim 1. Joo does not teach providing an artificial intelligence module through the server; receiving the digital pathological image through the artificial intelligence module to identify a nidus location in the digital pathological image by the artificial intelligence module; in response to the tag adding operation performed on the digital pathological image through the first interactive interface and in response to a location where the tag adding operation is performed falling within the nidus location, providing nidus information associated with the nidus location as the added tag information through the first interactive interface. However, in the same field of the invention, Saikou teaches providing an artificial intelligence module through the server (pars. 41-42, 45, 95, machine learning models are provided by the server-side image-processing device 1); receiving the digital pathological image through the artificial intelligence module to identify a nidus location in the digital pathological image by the artificial intelligence module (FIGS. 5-9, pars. 
58-80, the ML models (e.g., detection model) identify or detect lesion candidate areas on the target or captured image); in response to the tag adding operation performed on the digital pathological image through the first interactive interface and in response to a location where the tag adding operation is performed falling within the nidus location, providing nidus information associated with the nidus location as the added tag information through the first interactive interface (FIGS. 5-9, pars. 58-80, the ML models generate annotations (e.g., lines, shapes, rectangles, etc.) on top of the lesion candidate areas, and the system provides information associated with each annotation (i.e., candidate area) on a display device 2). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, and to display annotations on the detected candidate areas and display information for the annotations using machine-learning models, as taught by Saikou. The motivation is to accurately highlight critical areas in medical images, improving diagnostic efficiency, consistency, and clinician support. Claim 18 is substantially similar to Claim 8 and rejected under the same rationale. 9. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Joo et al. (US 2014/0164948 A1) in view of Rance et al. (US 2010/0058410 A1). As in Claim 9, Joo teaches all the limitations of Claim 1. 
Joo does not teach: in response to the first terminal device having the editing permission, enabling a poll function, establishing a poll menu and sending the poll menu to the server by the first terminal device, wherein the poll menu comprises a plurality of options; after displaying the poll menu in the first terminal device and the second terminal device through the server, sending poll information to the server by the first terminal device and the second terminal device according to the poll menu, wherein the poll information comprises a poll number, a poll time, a poll title, and a poll answer; receiving the poll information through the server, counting the poll information to obtain a poll result, and displaying the poll result through the first terminal device or the second terminal device. However, in the same field of the invention, Rance teaches in response to the first terminal device having the editing permission, enabling a poll function, establishing a poll menu and sending the poll menu to the server by the first terminal device, wherein the poll menu comprises a plurality of options (at least pars. 30, 58-59, 76-79, a presenter or channel owner can create a poll including a plurality of questions that can be sent to a server (e.g., a management server 110)); after displaying the poll menu in the first terminal device and the second terminal device through the server, sending poll information to the server by the first terminal device and the second terminal device according to the poll menu, wherein the poll information comprises a poll number, a poll time, a poll title, and a poll answer (at least pars. 
30, 41, 61-62, 80, 76-79, 100-102, the poll can be presented to the presenter and other users, and the server collects information (e.g., answers or feedback) from the users (i.e., the presenter and other users)); receiving the poll information through the server, counting the poll information to obtain a poll result, and displaying the poll result through the first terminal device or the second terminal device (at least pars. 30, 41, 61-62, 80, 76-79, 100-102, the poll results can be presented to the users (i.e., the presenter and other users)). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, and to incorporate the way to present the poll and display the poll results, as taught by Rance. The motivation is to quickly gather opinions, answers, or detailed feedback from participants or audience to guide or inform decisions or actions. Claim 19 is substantially similar to Claim 9 and rejected under the same rationale. 10. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Joo et al. (US 2014/0164948 A1) in view of Goede, Patricia (US 2016/0253456 A1). As in Claim 10, Joo teaches all the limitations of Claim 1. Joo further teaches that the image adjustment information comprises at least one of a coordinate location of a middle point of the digital pathological image relative to a display frame of the first interactive interface, a zoom ratio for zooming the digital pathological image, and a rotation angle for rotating the digital pathological image (pars. 47, 79, angle of the rotation). Joo does not appear to explicitly teach that the added tag information comprises at least one of a tag identification code, a tag title, a tag descriptor, a tag length, a tag author, tag coordinate information, and a tag time. 
However, in the same field of the invention, Goede teaches that the added tag information comprises at least one of a tag identification code, a tag title, a tag descriptor, a tag length, a tag author, tag coordinate information, and a tag time (par. 35, annotations can include metadata such as the author’s name and the date and time when the annotations were performed). Therefore, before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the system and method for collaboratively adjusting or annotating on the sharing-target medical image, as taught by Joo, and to include information, such as the author’s name and the date and time, in the annotations, as taught by Goede. The motivation is to give context and traceability, so collaborators know who made an annotation and when, which improves clarity in shared work. Claim 20 is substantially similar to Claim 10 and rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Rinna Yi whose telephone number is (571) 270-7752 and fax number is (571) 270-8752. The examiner can normally be reached on M-F 8:30am-5:00pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya, can be reached on (571) 272-4034. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/RINNA YI/
Primary Examiner, Art Unit 2179
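Editorial illustration (not part of the Office Action above): the information elements that the rejections map onto can be sketched as plain data structures: image adjustment information (claim 10), tag metadata (claims 6 and 10), and window dividing information (claim 2). All names below are hypothetical, not drawn from the application or the cited references.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ImageAdjustmentInfo:
    """Claim 10: any subset of center coordinate, zoom ratio, rotation angle."""
    center: Optional[Tuple[float, float]] = None  # middle point vs. display frame
    zoom_ratio: Optional[float] = None
    rotation_angle: Optional[float] = None

@dataclass
class TagInfo:
    """Claims 6/10: tag metadata (id, title, author, coordinates, time, ...)."""
    tag_id: str = ""
    title: str = ""
    author: str = ""
    coordinates: Optional[Tuple[float, float]] = None
    timestamp: str = ""

@dataclass
class WindowDividingInfo:
    """Claim 2: window count, window IDs, and image IDs per divided window."""
    window_count: int = 1
    window_ids: List[str] = field(default_factory=list)
    image_ids: List[str] = field(default_factory=list)

def apply_adjustment(state: dict, info: ImageAdjustmentInfo) -> dict:
    """Observer side: apply only the fields the editing terminal actually sent."""
    for key in ("center", "zoom_ratio", "rotation_angle"):
        value = getattr(info, key)
        if value is not None:
            state[key] = value
    return state

# The editing terminal sends only a zoom change; the observing terminal
# mirrors it without disturbing the rest of its view state.
state = {"center": (0.5, 0.5), "zoom_ratio": 1.0, "rotation_angle": 0.0}
apply_adjustment(state, ImageAdjustmentInfo(zoom_ratio=2.0))
print(state["zoom_ratio"])  # 2.0
```

The partial-update pattern mirrors the claim language, where the adjustment information "comprises at least one of" the listed fields rather than all of them.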

Prosecution Timeline

Oct 20, 2023
Application Filed
Dec 11, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602149
DISPLAY CONTROL BASED ON DIRECTIONAL VIDEO FLOW ANGLE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12602151
MOBILE ELECTRONIC DEVICE AND OPERATION INTERFACE ADJUSTMENT METHOD THEREOF BASED ON HANDEDNESS STATUS AND FREQUENCY OF USE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12587732
VIEWING ANGLE ADJUSTMENT METHOD AND DEVICE, STORAGE MEDIUM, AND ELECTRONIC DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12561006
DISPLAY APPARATUS FOR GESTURE RECOGNITION AND OPERATING METHOD THEREOF
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12548654
PREVENTING INADVERTENT CHANGES IN AMBULATORY MEDICAL DEVICES
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview (+49.4%): 99%
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 444 resolved cases by this examiner. Grant probability derived from career allow rate.
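The interview-adjusted figure is consistent with simple arithmetic on the other two numbers; a minimal sketch, assuming the +49.4% lift is applied as a relative multiplier to the 73% base rate and the display caps at 99% (this model is an assumption, not the vendor's disclosed formula):

```python
base = 0.73   # career allow rate from the examiner data
lift = 0.494  # relative interview lift shown on the dashboard

# Assumed multiplicative model, capped at the dashboard's 99% ceiling.
adjusted = min(base * (1 + lift), 0.99)
print(f"{adjusted:.0%}")  # 99%
```

Under this assumption the uncapped product exceeds 100%, so any reasonable capping rule reproduces the displayed 99%.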
