Prosecution Insights
Last updated: April 19, 2026
Application No. 17/953,388

DISPLAY METHOD, DISPLAY SYSTEM, AND A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING A PROGRAM

Final Rejection — §101, §102, §103
Filed: Sep 27, 2022
Examiner: MEHMOOD, JENNIFER
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Seiko Epson Corporation
OA Round: 3 (Final)

Grant Probability: 65% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 3y 1m
Grant Probability With Interview: 95%

Examiner Intelligence

Grants 65% of resolved cases.
Career Allow Rate: 65% (160 granted / 247 resolved), +2.8% vs TC avg
Interview Lift: +30.6% (strong), across resolved cases with an interview
Typical timeline: 3y 1m avg prosecution; 21 currently pending
Career history: 268 total applications across all art units
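The report does not state how the interview-lift figure is computed; the natural reading is the allowance-rate gap between resolved cases with and without at least one examiner interview. A minimal sketch under that assumption (the `allowed` and `had_interview` record fields are hypothetical, and the data below is a toy example, not the examiner's actual record):

```python
# Sketch of one plausible interview-lift computation: the allowance-rate
# gap between resolved cases with and without an examiner interview.
# Field names are hypothetical; the report does not disclose its method.

def allow_rate(cases: list) -> float:
    return sum(c["allowed"] for c in cases) / len(cases) if cases else 0.0

def interview_lift(resolved: list) -> float:
    with_iv = [c for c in resolved if c["had_interview"]]
    without_iv = [c for c in resolved if not c["had_interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data: 90% allowance with an interview vs 60% without.
cases = ([{"allowed": True,  "had_interview": True}] * 9 +
         [{"allowed": False, "had_interview": True}] * 1 +
         [{"allowed": True,  "had_interview": False}] * 6 +
         [{"allowed": False, "had_interview": False}] * 4)
print(f"{interview_lift(cases):+.1%}")  # -> +30.0% on this toy data
```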

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§102: 31.9% (-8.1% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§112: 17.6% (-22.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 247 resolved cases
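The deltas above are internally consistent: each one implies a Tech Center average of 40.0% for every statute (e.g., 2.8% − (−37.2%) = 40.0%). A short sketch reproducing the displayed figures under that inference; the 40.0% baseline is derived from the report's own numbers, not independently sourced:

```python
# Reconstructing the statute-specific deltas shown above. The 40.0% Tech
# Center baseline is inferred from the report's own figures (rate minus
# delta equals 40.0% for every statute).

EXAMINER_RATE = {"101": 0.028, "102": 0.319, "103": 0.450, "112": 0.176}
TC_AVG = 0.400  # implied baseline, e.g. 2.8% - (-37.2%) = 40.0%

for statute, rate in EXAMINER_RATE.items():
    delta = rate - TC_AVG
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```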

Office Action

§101, §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s response to the last Office Action, filed October 22, 2025, has been entered and made of record. Applicant has amended claims 1-9. Claims 1-9 are currently pending.

35 U.S.C. § 112(b)

The Applicant’s amendments, as well as the remarks on page 5 regarding the 35 U.S.C. § 112(b) rejection of claim 1, have been fully considered and are persuasive. Therefore, the 35 U.S.C. § 112(b) rejection of claim 1 has been withdrawn.

35 U.S.C. § 101

Regarding the rejection of claim 8: The Applicant’s amendments, as well as the remarks on pages 6 and 7, have been fully considered and are persuasive. Therefore, the 35 U.S.C. § 101 rejection of claim 8 has been withdrawn.

Regarding the rejection of claims 1, 3, 5, and 6: The Applicant’s amendments, as well as the remarks on pages 6 and 7, have been fully considered and are not persuasive. Therefore, the 35 U.S.C. § 101 rejection of these claims is maintained. To overcome this rejection, the recommendation is to consider, as claims 2, 4, 7, and 8 are written:
- Specifying the technical problem solved (e.g., how marker re-reading improves accuracy or efficiency in procedure guidance)
- Describing the specific technical features of the projector that enable the solution
- Explaining how the combination of marker reading and projection achieves a non-conventional result
- Detailing the specific technical improvements in the procedure workflow

Claim Rejections - 35 USC § 102

Applicant’s arguments (see pages 7-11 of the Applicant’s remarks) with respect to the rejections of claims 1-9 have been fully considered, and the rejections are withdrawn in view of the Applicant’s amendments. However, upon further consideration, a new ground(s) of rejection is made using the references of Shin et al. (US 2023/0014774 A1) in view of Wyper et al. (US 2020/0302417 A1).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The claimed invention is directed to non-statutory subject matter. Claims 1, 3, 5, and 6 do not fall within at least one of the four categories of patent eligible subject matter because the claims are directed to an abstract idea (a judicial exception without reciting significantly more) of data retrieval, conditional logic, and sequential display without sufficient integration into a specific technological solution. Under Alice Corp. v. CLS Bank and Mayo v. Prometheus, the claim:
- Recites generic steps applicable to any data retrieval and display system
- Does not demonstrate how the projector’s specific technical capabilities solve a technical problem
- Lacks specificity regarding the marker technology or identification mechanism
- Describes conventional computer functions (read, store, compare, display) applied to a generic projector
- Does not show that the projector itself is modified or operates in a non-conventional manner

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Shin et al. (US 2023/0014774 A1) in view of Wyper et al. (US 2020/0302417 A1).

Regarding claim 1, Shin discloses a display method comprising: acquiring, by a projector, identification information from a marker image (can surrounding images, fig. 8, be considered a “marker”? 820-860, identifying objects as markers; para. 0094) in which the identification information is recorded, the identification information associated with a plurality of procedures; projecting, by the projector (para. 0076, “the augmented reality guide 520 may be represented as a virtual object through the external electronic device”), a projection image concerning a first procedure of the plurality of procedures (“Slice the potato,” item 522) corresponding to the identification information; acquiring, by the projector, the identification information from the marker again after projecting the projection image concerning the first procedure; and after acquiring the identification information again, projecting, by the projector, a projection image concerning a second procedure (after “Slice the potato,” “Chop the spinach,” Fig. 5C, item 532) of the plurality of procedures performed later than the first procedure, the second procedure (or subsequent procedures including cutting instructions as depicted in Figures 7A and 7B) corresponding to the identification information.

While Shin teaches identifying information in an environment by obtaining images associated with a plurality of procedures, Shin does not disclose that the identification information originates from a marker that contains ID information.
However, Wyper teaches identification information from a marker (para. 0035; QR code scanner) associated with a plurality of procedures. [Two figures from Wyper reproduced in the Office Action are omitted here.]

Therefore, it would have been obvious to one having ordinary skill in the art, before the effective filing date of the claimed invention, to cut down on extraneous equipment such as eyeglasses and instead use a commonly known, convenient, and accessible device such as a cell phone to receive procedural instructions through the use of a marker/QR code (para. 0024).

Regarding claim 2, Shin discloses that acquiring the identification information again includes acquiring, by the projector, the identification information after a time period after the identification information is acquired (para. 0073, tasks 1-5; “which the first user’s operation is stored in the glasses-type wearable device 100 (e.g., the memory 140) separately for each of at least one task may be performed simultaneously with or after operation 420. According to an embodiment of the disclosure, operation 420 may be performed according to a predesignated time period or the user’s request”).

Regarding claim 3, Shin discloses that acquiring the identification information again includes acquiring, by the projector, the identification information after a time period after the projection image concerning the first procedure is displayed (para. 0073: tasks completed by either the first or second user are determined in a particular order, and upon completion of one task it is determined whether to move on to a subsequent task (Fig. 4)).

Regarding claim 4, Shin discloses that acquiring the identification information again includes acquiring, by the projector, the identification information after a period that is started after the identification information is acquired and in which the identification information is not acquired (last sentence of para. 0073: operation 420 may be performed according to a predesignated time period or the user’s request; paras. 0097, 0098: identifying a knife being on the table (Fig. 11A) or off the table (Fig. 11B – identification information not acquired)).

Regarding claim 5, Shin discloses that acquiring the identification information includes acquiring, by the projector, the identification information from the marker located in a first position, and acquiring the identification information again includes acquiring, by the projector, the identification information from the marker located in a second position different from the first position and, thereafter, acquiring, by the projector, the identification information: “the second user’s glasses-type wearable device 310 may identify whether the second object is positioned inside or around the second external electronic device. For example, when the second external electronic device is a refrigerator 312, the refrigerator 312 may have information about food stored in the refrigerator 312 and, if the object selected by the first user is spinach, it may transmit information about whether spinach is present in the refrigerator 312 (e.g., object position information); According to an embodiment of the disclosure, when the image transmitted from the refrigerator 312 does not include the specific object (e.g., kitchen knife), the second user’s glasses-type wearable device 310 may transmit an image transmission request to another second external electronic device.
According to an embodiment of the disclosure, the second user’s glasses-type wearable device 310 may request the second external electronic device to determine whether there is the specific object. In this case, according to an embodiment of the disclosure, the second user’s glasses-type wearable device 310 may transmit information about the specific object to the second external electronic device (e.g., the smartphone 314)”. Paragraphs 0097 and 0098 also describe figures 11A and 11B, of the knife being on the table or non-existent. The reasons for combining Shin and Wyper, using the identification information from a marker, are the same as explained in the rejection of claim 1.

Regarding claim 6, Shin discloses receiving a first input (completion of a task) for instructing to acquire the identification information again, wherein acquiring the identification information again includes acquiring, by the projector, the identification information when the first input is received (para. 0073: “According to an embodiment of the disclosure, the reference image may be separately stored for each of at least one task constituting one operation. Further, it may be stored in the glasses-type wearable device 100 (e.g., the memory 140) separately for each of at least one task according to an embodiment of the disclosure. For example, when the first user performs one operation of pouring water to a pot (task 1), placing it on induction cooktop (task 2), washing a potato (task 3), slicing the potato (task 4), and frying the sliced potato (task 5),” information (e.g., the obtained image) about each task may be stored according to the order of elapsed times, separately for each task of the operation. For example, when an image in which the user washes the potato is obtained (e.g., when task 3 is identified), images for task 1 and task 2 may be stored together with time information in the glasses-type wearable device 100 (e.g., the memory 140). According to an embodiment of the disclosure, a reference as to how to distinguish the tasks may have been previously determined. For example, when the operation is “cooking,” the task of pouring water into a pot followed by placing it on induction cooktop by the user may be pre-designated to be designated as one task and stored. The function or operation of distinguishing the task may be learned by an artificial intelligence model. Such learning may be performed, e.g., by the glasses-type wearable device 100 itself or through a separate server (e.g., the server 320)).

Regarding claim 7, Shin discloses receiving, by the projector, a second input for instructing to project the projection image concerning the first procedure after projecting the projection image concerning the second procedure; and projecting, by the projector, the projection image concerning the first procedure when the second input is received (Fig. 8, para. 0086: “For example, when the reference image showing that the potato is sliced and the second surrounding image correspond to each other (e.g., when a result of analysis of motion in the image is quantitatively included in an error range), the server 320 may determine that the second user is currently slicing the potato. According to an embodiment of the disclosure, in operation 860, the server 320 may identify the difference in progress status.
According to an embodiment of the disclosure, e.g., when it is determined that user 1 is currently frying the sliced potato (e.g., when it is determined that task 5 is being performed), and user 2 is slicing the potato (e.g., when it is determined that task 4 is being performed), the server 320 may determine that the difference between the users’ tasks is 1. According to an embodiment of the disclosure, in operation 870, the server 320 may transmit the result of identification to the glasses-type wearable device 100. According to an embodiment of the disclosure, in operation 880, the glasses-type wearable device 100 may request the server 320 to transmit an augmented reality guide. According to an embodiment of the disclosure, the server 320 may transmit the augmented reality guide to the second user’s glasses-type wearable device 310.”).

Regarding claim 8, Shin discloses a projector comprising: a sensor (180); a display device (150); and at least one processor (120) programmed to execute: acquiring, by controlling the sensor of the projector (Fig. 5B, video through the second user’s glasses-type wearable device 310), identification information from a marker in which the identification information is recorded, the identification information associated with a plurality of procedures; projecting, by controlling the display device of the projector, a projection image concerning a first procedure of the plurality of procedures corresponding to the identification information; acquiring, by controlling the sensor of the projector, the identification information from the marker again (after slicing the potato; Fig. 5C, item 532) after projecting the projection image concerning the first procedure (slicing the potato is considered the first procedure); and after acquiring the identification information again, projecting, by controlling the display device of the projector, a projection image concerning a second procedure of the plurality of procedures performed later than the first procedure, the second procedure corresponding to the identification information (after “Slice the potato,” “Chop the spinach,” Fig. 5C, item 532, or subsequent procedures including cutting instructions as depicted in Figures 7A and 7B). The reasons for combining Shin and Wyper using the identification information from a marker are the same as explained in the rejection of claim 1.

Regarding claim 9, the claim is interpreted and rejected for the same reasons as claims 1 and 8; in addition, Shin discloses a non-transitory computer-readable storage medium storing a program, the program instructing a processing device of a projector (Fig. 1, items 120, 150; para. 0050). The reasons for combining Shin and Wyper using the identification information from a marker are the same as explained in the rejection of claims 1 and 8.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
- McCoy et al. (US 2015/0310539) discloses a projector and camera located on a user whereby objects within a shopping environment are marked for ease of the retail shopping experience.
- Yoshida et al. (US 6,334,684) discloses a projector located above a kitchen space designed to prompt a user with cooking instructions on display 7.
- Jun et al. (US 12,072,489) discloses a projector (150; figures 2D and 4) located on eyewear for identifying objects in an environment (col. 3, lines 25-40).
- Bhogal (US 10,739,013) discloses an operation of cooking instructions based on user feedback using a marker on a food. [Two figures from Bhogal reproduced in the Office Action are omitted here.]

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jennifer Mehmood, whose telephone number is (571) 272-2976. The examiner can normally be reached Monday through Friday from 8am to 5pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The fax phone number for the organization where this application or proceeding is assigned is (571) 272-4637.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664
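To see why the examiner characterizes claim 1 as generic read/store/compare/display steps, note that the contested method reduces to a read-project-reread-project loop. Below is a minimal illustrative sketch of that claimed sequence, not Applicant's actual implementation: the `Projector` class, its `read_marker` and `project` methods, and the cooking example data are all hypothetical stand-ins for hardware the claim leaves generic.

```python
# Illustrative sketch mapping the claim 1 language onto code; nothing here
# comes from Applicant's specification. Every step below is a generic read
# or display operation, which is the core of the maintained §101 rejection.

from dataclasses import dataclass, field

@dataclass
class Projector:
    marker_id: str                                   # identification info recorded in the marker
    procedures: dict = field(default_factory=dict)   # id -> ordered procedure images

    def read_marker(self) -> str:
        # Stand-in for sensor/camera capture of the marker image.
        return self.marker_id

    def project(self, image: str) -> None:
        # Stand-in for the display device.
        print(f"projecting: {image}")

def display_method(p: Projector) -> None:
    ident = p.read_marker()        # acquire identification information
    steps = p.procedures[ident]    # plurality of procedures for that id
    p.project(steps[0])            # project the first procedure
    if p.read_marker() == ident:   # re-acquire the same id from the marker
        p.project(steps[1])        # then project the later, second procedure

# Toy usage echoing Shin's cooking example:
display_method(Projector("recipe-1", {"recipe-1": ["Slice the potato", "Chop the spinach"]}))
```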

Prosecution Timeline

Sep 27, 2022
Application Filed
Feb 05, 2025
Non-Final Rejection — §101, §102, §103
May 12, 2025
Response Filed
Jul 17, 2025
Non-Final Rejection — §101, §102, §103
Oct 22, 2025
Response Filed
Mar 19, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572774
NEURAL NETWORK PROCESSOR AND METHOD OF NEURAL NETWORK PROCESSING
2y 5m to grant • Granted Mar 10, 2026
Patent 10269295
ORGANIC LIGHT EMITTING DISPLAY DEVICE AND DRIVING METHOD THEREOF
2y 5m to grant • Granted Apr 23, 2019
Patent 9245189
OBJECT APPEARANCE FREQUENCY ESTIMATING APPARATUS
2y 5m to grant • Granted Jan 26, 2016
Patent 8344909
METHOD AND SYSTEM FOR COLLECTING TRAFFIC DATA, MONITORING TRAFFIC, AND AUTOMATED ENFORCEMENT AT A CENTRALIZED STATION
2y 5m to grant • Granted Jan 01, 2013
Patent 8294567
METHOD AND SYSTEM FOR FIRE DETECTION
2y 5m to grant • Granted Oct 23, 2012
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 65%
With Interview: 95% (+30.6%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 247 resolved cases by this examiner. Grant probability is derived from the career allow rate.
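The 95% with-interview figure is consistent with simply adding the +30.6% interview lift to the unrounded career allow rate (160/247 ≈ 64.8%). A sketch under that assumed additive model, which is an inference from the displayed figures rather than a documented methodology:

```python
# Sketch of the projection arithmetic implied above. The additive model
# (base allow rate + interview lift, capped at 100%) is an assumption.

base = 160 / 247   # career allow rate, ~64.8% (displayed as 65%)
lift = 0.306       # interview lift from the examiner stats

with_interview = min(base + lift, 1.0)
print(f"Grant probability with interview: {with_interview:.0%}")  # -> 95%
```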
