Prosecution Insights
Last updated: April 19, 2026
Application No. 19/209,990

RECONFIGURING REALITY USING A REALITY OVERLAY DEVICE

Final Rejection — §103, §DP
Filed: May 16, 2025
Examiner: GALKA, LAWRENCE STEFAN
Art Unit: 2621
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 76% (649 granted / 851 resolved; +14.3% vs TC avg; above average)
Interview Lift: +18.6% across resolved cases with interview (strong)
Avg Prosecution: 2y 11m typical timeline (28 applications currently pending)
Total Applications: 879 across all art units (career history)
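These card figures can all be traced back to the raw counts shown above. Here is a minimal sketch of the arithmetic, assuming (as is typical for examiner-analytics dashboards, though not stated on this page) that the allow rate is grants divided by resolved cases and that the interview lift is a percentage-point difference in allow rates; the variable names are ours:

```python
# Hypothetical reconstruction of the Examiner Intelligence card arithmetic.
granted, resolved = 649, 851          # "649 granted / 851 resolved"

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")            # ~76.3%, displayed as 76%

# "+14.3% vs TC avg" then implies a Tech Center baseline of roughly:
implied_tc_avg = allow_rate - 0.143
print(f"Implied TC 2600 average: {implied_tc_avg:.1%}")  # ~62.0%

# "+18.6% interview lift", read as percentage points over the base rate,
# reproduces the "95% with interview" figure used elsewhere on this page:
with_interview = allow_rate + 0.186
print(f"Allow rate with interview: {with_interview:.1%}")  # ~94.9%, shown as 95%
```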

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 851 resolved cases
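The same reverse arithmetic applies to the statute-specific chart: each displayed delta reads as the examiner's rate minus the Tech Center average estimate for that statute. A short sketch of that reading (our reconstruction, not the tool's published formula) shows that every row implies a TC baseline of about 40%:

```python
# Hypothetical reconstruction of the statute-specific performance deltas.
# Values are (examiner_rate, delta_vs_tc_avg) as displayed above.
stats = {
    "§101": (0.111, -0.289),
    "§103": (0.353, -0.047),
    "§102": (0.256, -0.144),
    "§112": (0.183, -0.217),
}

for statute, (examiner_rate, delta) in stats.items():
    implied_tc_avg = examiner_rate - delta   # displayed rate = TC avg + delta
    print(f"{statute}: examiner {examiner_rate:.1%}, implied TC avg {implied_tc_avg:.1%}")
# Each row comes out near 40.0%, i.e. the single Tech Center average
# estimate the chart plots as its baseline.
```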

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Response to Amendment

Applicant's submission of a response on 1/2/26 has been received and considered. In the response, Applicant amended claims 1-20. Therefore, claims 1-20 are pending.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 4, 6 and 7 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 8, 8, 8 and 8 of U.S. Patent No. 12,311,261. Although the claims at issue are not identical, they are not patentably distinct from each other because the respective claims of the ‘261 patent anticipate the instant claims.

Claims 1, 2, 6, 8, 9, 13, 15, 16 and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 1, 1, 16, 16, 16, 11, 11 and 11 of U.S. Patent No. 11,691,080. Although the claims at issue are not identical, they are not patentably distinct from each other because the respective claims of the ‘080 patent anticipate the instant claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1, 2, 4-9, 11-16 and 18-20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Watanabe et al. (pub. no. 20060079324) in view of Biocca et al. (pub. no. 20080266323) and Mullen (pub. no. 20060105838).

Regarding claim 1, Watanabe discloses a method comprising:

capturing at least one image including a plurality of physical entities in an area of a physical environment viewable in a device by at least one camera of the device (“Firstly, a card 54 used in this video game system 10 will be explained. This card 54 has a size and a thickness being the same as a card used in a general card game. As shown in FIG. 2A, on the front face, there is printed a picture representing a character being associated with the card 54. As shown in FIG. 2B, the identification image 56 is printed on the reverse side”, [0063]; “As shown in FIG. 3A and FIG. 3B, the functions implemented by the video game system is to pick up images by the CCD camera 42, for example, of six cards 541, 542, 543, 544, 545, and 546, which are placed on a desk, table or the like 52, to display thus picked up images in the monitor 30. Simultaneously, on the images of the cards 541 to 546 displayed in the screen of the monitor 30, for example, on the identification images 561, 562, 563, 564, 565, and 566 respectively attached to the cards 541, 542, 543, 544, 545, and 546, images of objects (characters) 701, 702, 703, 704, 705, and 706 are displayed respectively associated with the identification images 561 to 566 of the cards 541 to 546 in such a manner as being superimposed thereon. According to the displaying manner as described above, it is possible to achieve a game which is a mixture of a game and a video game. Here, the "character" indicates an object such as a human being, an animal, and a hero or the like who appears in a TV show, animated movie, and the like”, [0068]);

identifying, by at least one processor of the device, from the plurality of physical entities in the captured at least one image, a physical entity with which a virtual image is displayed based on the physical entity corresponding to a pre-stored physical entity in the device (“Firstly, the card recognition program 82 is to perform processing for recognizing the identification image 561 of the card (for example, card 541 in FIG. 3A) placed on the desk, table, or the like 52, so as to specify a character image (for example image 701 in FIG. 3B) to be displayed on the identification image 561”, [0073]);

determining, by the at least one processor of the wearable device, a position of the identified physical entity in the area based on the captured at least one image (“As described above, the reference cell finding function 104 finds out image data of the reference cell 62 of the logo part 58 from the image data drawn in the image memory 20 (pickup image data), and detects a position of the image data of the reference cell 62. The position of the image data of the reference cell 62 is detected as a screen coordinate”, [0075]; “As shown in FIG. 8, the camera coordinate detection function 108 obtains a camera coordinate system (six axial directions: x, y, z, Θx, Θy and Θz) having a camera viewing point C0 as an original point based on the detected screen coordinate and focusing distance of the CCD camera 42. Then, the camera coordinate of the identification image 561 at the card 541 is obtained. At this moment, the camera coordinate at the center of the logo part 58 in the card 541 and the camera coordinate at the center of the code part 60 are obtained”, [0077]);

identifying a virtual entity corresponding to the identified physical entity from among at least one pre-stored virtual entity in the device (“In the present embodiment, an association table is registered, which associates various 2D patterns of the identification cells 66 with the identification numbers respectively corresponding to the patterns, for example, in a form of database 68 (2D code database, see FIG. 6 and FIG. 7), in the hard disk 44, optical disk 32, and the like. Therefore, by collating a detected 2D pattern of the identification cells 66 with the association table within the database 68, the identification number of the card 54 is easily detected”, [0067]; “As shown in FIG. 7, the identification information detecting function 106 detects image data of the corner cells 64 based on the position of the image data of the reference cell 62 having been detected. Image data of the area 112 formed by the reference cell 62 and the corner cells 64 is subjected to affine transformation, assuming the image data as being equivalent to the image 114 which is an image viewing the identification image 561 of the card 541 from upper surface thereof, and 2D pattern of the code part 60, that is, code 116 made of 2D patterns of the corner cells 64 and the identification cells 66 is extracted. Then, identification number and the like are detected from thus extracted code 116. As described above, detection of the identification number is carried out by collating the code 116 thus extracted with the 2D code database 68”, [0076]; “For example as shown in FIG. 9, a large number of records are arranged to constitute elements of the object information table 118, and in one record, an identification number, basic parameters (experiential value, level), a storage head address of character image data (level 1), parameters of level 1 (physical energy, offensive power, defensive power, and the like), a storage head address of character image data (level 2), parameters of level 2 (physical energy, offensive power, defensive power, and the like), a storage head address of character image data (level 3), parameters of level 3 (physical energy, offensive power, defensive power, and the like), and a valid/invalid bit are registered”, [0080]); and

displaying at least one virtual image of the identified virtual entity at a predefined position relative to the determined position of the physical entity, wherein the displaying the at least one virtual image of the identified virtual entity comprises displaying the at least one virtual image of the virtual entity to appear at a distance of the physical entity from the device (“As shown in FIG. 3A and FIG. 3B, the functions implemented by the video game system is to pick up images by the CCD camera 42, for example, of six cards 541, 542, 543, 544, 545, and 546, which are placed on a desk, table or the like 52, to display thus picked up images in the monitor 30. Simultaneously, on the images of the cards 541 to 546 displayed in the screen of the monitor 30, for example, on the identification images 561, 562, 563, 564, 565, and 566 respectively attached to the cards 541, 542, 543, 544, 545, and 546, images of objects (characters) 701, 702, 703, 704, 705, and 706 are displayed respectively associated with the identification images 561 to 566 of the cards 541 to 546 in such a manner as being superimposed thereon”, [0068]; “It is a matter of course that, as shown in FIG. 4A, an image of the user 74 who holds one card 542, for example, is picked up, so as to be seen in the monitor 30, thereby as shown in FIG. 4B, displaying the image of the user 74, the identification image 562 of the card 542, and the character image 702. Accordingly, it is possible to create a scene such that a character is put on the card 542 held by the user 74”, [0069]).

Regarding claim 1, it is noted that Watanabe does not disclose a display that is a wearable device. Biocca, however, teaches a display that is a wearable device (“Augmented reality (hereinafter "AR") is the modification of human perception of the environment through the use of computer-generated virtual augmentations. AR realizations include modifications of video to include virtual elements not present in the original image, computer displays with cameras mounted on the head, so as to simulate the appearance of a see-through display, and head-mounted displays that overlay computer generated virtual content onto a user's field of vision. Augmented reality displays allow for the display of information as if it were attached to objects in the world or free-floating as if in space. Head mounted display technologies include see-through displays that optically compose computer-generated augmentations with the user's field of view, displays where a user is viewing the world through a monitor and the augmentations are electronically combined with real-world imagery captured by a camera, and retinal scan displays or other embodiments that compose the virtual annotations with the real-world imagery on the retina of the eye. In all cases, virtual elements are added to the world as perceived by the user. A key element of augmented reality systems is the ability to track objects in the real world. AR systems overlay virtual content onto images from the real world. In order to achieve the necessary registration between the virtual elements and real objects, a tracking system is required. Tracking is the determination of the pose (position and orientation) of an object or some part of the user in space. As an example, a tracking system may need to determine the location and orientation of the hand so as to overlay a menu onto the image of the hand as seen by a mobile AR user. Tracking is responsible for determining the position of the hand, so that graphics can be rendered accurately. One approach to tracking is the placement of a pattern onto the object that is to be tracked. This pattern, sometimes referred to as a fiducial or marker, is captured by a camera, either in the image to be augmented or by a dedicated tracking system. The pattern is unique in the environment and designed to provide a tracking system with sufficient information to locate the pattern reliably in the image and accurately determine the pose of the pattern and, thereby, the pose of the object that pattern is attached to”, [0002] – [0005]).

Exemplary rationales that may support a conclusion of obviousness include use of a known technique to improve similar devices (methods, or products) in the same way. Here, both Watanabe and Biocca are directed to augmented reality systems. To use the head-mounted display of Biocca as the display in the Watanabe invention would be to use a known technique to improve a similar method in the same way. Therefore, it would have been obvious to a person having ordinary skill in the art as of the date of the claimed invention to modify Watanabe to use the head-mounted display as taught by Biocca. To do so would provide a more seamless experience, thereby increasing the perceived entertainment value of the invention.

In addition, it is noted that Watanabe does not explicitly disclose that a size of the displayed at least one virtual image of the virtual entity is determined based on the determined position of the physical entity. Mullen, however, teaches a size of the displayed at least one virtual image of the virtual entity that is determined based on the determined position of the physical entity (“FIG. 10 shows virtual characters 1010 and 1060 generated on display screens 1001 and 1051. Depending on the distance of virtual characters (either controlled by an opponent or computer controlled) from the display (or locating device on the system), the size of virtual characters that are shown may be manipulated. In this manner, a user is provided augmented reality indicia that is scaled to the perspective of that user (e.g., height, pitch, roll, distance and location of the perspective). As such a true three-dimensional virtual object/character can be provided whose size scales according to the height, pitch, roll, distance, and location of the perspective to the virtual object/character (e.g., the perspective of a user)”, [0080]).

Exemplary rationales that may support a conclusion of obviousness include use of a known technique to improve similar devices (methods, or products) in the same way. Here, both Watanabe and Mullen are directed to augmented reality systems. To use the distance-based scaling of Mullen in the Watanabe invention would be to use a known technique to improve a similar method in the same way. Therefore, it would have been obvious to a person having ordinary skill in the art as of the date of the claimed invention to modify Watanabe to use the distance-based scaling as taught by Mullen. To do so would provide a more intuitive user interface, thereby increasing the perceived entertainment value of the invention.

Regarding claim 2, the combination of Watanabe and Mullen discloses determining a distance in relation to the physical entity (Watanabe: [0075] & [0077]), wherein the size of the displayed at least one virtual image of the virtual entity is scaled based on the determined distance in relation to the physical entity (Mullen: [0080]).

Regarding claim 4, Watanabe discloses the at least one virtual image of the identified virtual entity is displayed by overlaying the at least one virtual image of the virtual entity on a captured area of the physical environment at the predefined position relative to the position of the identified physical entity, such that both the at least one virtual image of the virtual entity and at least portions of the captured area of the physical environment are simultaneously viewable in the wearable device ([0068]).

Regarding claim 5, Watanabe discloses the at least one pre-stored virtual entity comprises the virtual entity configured to be displayed based on detection of the pre-stored physical entity and another virtual entity configured to be displayed based on detection of another pre-stored physical entity, different from the pre-stored physical entity (“If the image data of the reference cell 62 exists, as shown in FIG. 12A for example, assuming the case where two cards 542 and 543 are arranged in the lateral direction on the side of the card 541, the processing proceeds to step S4 in FIG. 15, and the identification information detecting function 166 allows the image data of the region formed by the reference cell 62 and the corner cells 64 to be subjected to affine transformation. Then, the identification numbers associated with the cards 542 and 543 respectively based on the identification images 562 and 563 of the cards 542 and 543 are detected. Detection of the identification number is carried out by collating the codes extracted from the identification images 562 and 563 with the 2D code database 68. Then, in step S5, the character image searching function 168 searches the object information table 118 for character image data 120 based on each of the identification number and the like of the cards 542 and 543. For example, records respectively associated with the identification numbers are searched out from the object information table 118, and if these records thus searched out are "valid", the image data 120 is read out from the storage head address corresponding to the current level out of the multiple storage head addresses registered in each of the records”, [0108] & [0109]).

Regarding claim 6, Watanabe discloses the predefined position relative to the position of the identified physical entity is at least a portion of the physical entity, such that the virtual entity is displayed to be overlaid on the at least the portion of the physical entity ([0068]).

Regarding claim 7, Watanabe discloses detecting a stationary physical entity in the captured at least one image, the stationary physical entity being detected by determining that the stationary physical entity corresponds to another pre-stored physical entity; and displaying an image of another virtual entity at another predefined position relative to a position of the detected stationary physical entity, such that both the at least one virtual image of the virtual entity and the image of the other virtual entity are simultaneously viewable in the wearable device ([0108] & [0109]).

Claims 8, 9 and 11-14 are directed to the device that implements the methods of claims 1, 2, and 4-7 respectively and are rejected for the same reasons as claims 1, 2, and 4-7 respectively. Claims 15, 16 and 18-20 are directed to an article of manufacture containing code that implements the methods of claims 1, 2, and 4-6 respectively and are rejected for the same reasons as claims 1, 2, and 4-6 respectively.

Allowable Subject Matter

Claims 3, 10 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant’s arguments filed on January 02, 2026 have been fully considered but they are not entirely persuasive. On pages 12-14, Applicant argues that the amended claims overcome the prior art of record because Watanabe does not disclose scaling the virtual image based on the distance of the associated physical entity. Examiner agrees. However, it would have been obvious given the teachings of Mullen as detailed above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE STEFAN GALKA whose telephone number is (571)270-1386. The examiner can normally be reached M-F 6-9 & 12-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Lewis, can be reached at 571-272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAWRENCE S GALKA/
Primary Examiner, Art Unit 3715
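For readers tracking the technology rather than the procedure: the combination the examiner assembles is a classic marker-based AR pipeline. Detect a fiducial in the camera frame (Watanabe's reference cell), affine-rectify and decode its 2D code against a lookup table (the 2D code database 68 and object information table 118), then render the associated character at the card's position, scaled by the card's distance from the camera (Mullen's teaching). Below is a self-contained sketch of that flow; every name and number in it is a hypothetical illustration, not code from any cited reference:

```python
# Minimal sketch of the marker-based AR pipeline described in the rejection.
# All names (Marker, OBJECT_TABLE, detect_markers, the 800 px focal length)
# are hypothetical stand-ins for Watanabe's reference-cell detection, affine
# rectification and table lookup, plus Mullen's distance-based scaling.
from dataclasses import dataclass

@dataclass
class Marker:
    code: int                              # ID decoded from the card's 2D code part
    position: tuple[float, float, float]   # camera-space (x, y, z); z is depth in meters

# Stand-in for the object information table: decoded code -> character asset.
OBJECT_TABLE = {116: "character_level_1"}

def detect_markers(frame) -> list[Marker]:
    # A real detector would locate the reference cell, affine-rectify the card
    # region, and decode the corner/identification cells (Watanabe's functions
    # 104/106/108). We return one canned detection so the sketch runs.
    return [Marker(code=116, position=(120.0, 80.0, 0.5))]

def overlay_width_px(focal_px: float, depth_m: float, card_width_m: float = 0.06) -> float:
    # Pinhole-camera relation: apparent width = focal length * width / depth.
    # This inverse-distance scaling is all Mullen's teaching requires.
    return focal_px * card_width_m / depth_m

for m in detect_markers(frame=None):
    character = OBJECT_TABLE.get(m.code)
    if character:                          # unknown cards overlay nothing
        width = overlay_width_px(focal_px=800.0, depth_m=m.position[2])
        print(f"Render {character} at {m.position[:2]}, about {width:.0f} px wide")
```

Run as-is, the sketch reports a character about 96 px wide for a 6 cm card half a meter away; halving the depth doubles the rendered size, which is exactly the distance-based scaling behavior the amended claims recite and Mullen is cited for.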

Prosecution Timeline

May 16, 2025
Application Filed
Jun 27, 2025
Non-Final Rejection — §103, §DP
Aug 25, 2025
Interview Requested
Sep 03, 2025
Applicant Interview (Telephonic)
Sep 03, 2025
Examiner Interview Summary
Jan 02, 2026
Response Filed
Mar 12, 2026
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589294
SYSTEMS AND METHODS FOR ELECTRONIC GAME CONTROL WITH VOICE DETECTION AND AUDIO STREAM PROCESSING
2y 5m to grant • Granted Mar 31, 2026
Patent 12576334
RECEPTION APPARATUS, TRANSMISSION APPARATUS, AND INFORMATION PROCESSING METHOD
2y 5m to grant • Granted Mar 17, 2026
Patent 12569764
INPUT ANALYSIS AND CONTENT ALTERATION
2y 5m to grant • Granted Mar 10, 2026
Patent 12569756
CLOUD APPLICATION-BASED DEVICE CONTROL METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM
2y 5m to grant • Granted Mar 10, 2026
Patent 12573270
CONTROLLING A USER INTERFACE
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 95% (+18.6%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 851 resolved cases by this examiner. Grant probability derived from career allow rate.
