Prosecution Insights
Last updated: April 19, 2026
Application No. 18/422,872

SYSTEM AND METHOD FOR PROVIDING AN INTERACTION WITH REAL-WORLD OBJECT VIA VIRTUAL SESSION

Final Rejection — §103
Filed: Jan 25, 2024
Examiner: GRAY, RYAN M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 88% (Favorable)
OA Rounds: 3-4
To Grant: 2y 2m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 88% (589 granted / 672 resolved; +25.6% vs TC avg) — above average
Interview Lift: +10.9% — moderate (~+11%) lift across resolved cases with interview vs. without
Typical Timeline: 2y 2m avg prosecution; 18 applications currently pending
Career History: 690 total applications across all art units
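The headline figures in this panel follow from simple arithmetic on the career counts shown above. A minimal sketch; the rounding convention and the additive treatment of the interview lift are assumptions about the tool's methodology, not documented behavior:

```python
# Reproduce the examiner panel's headline figures from the raw counts
# shown on this page. Only the inputs (589 granted, 672 resolved,
# +10.9% lift) come from the page; the combining rule is assumed.
granted, resolved = 589, 672

# Career allow rate: granted / resolved, displayed as a whole percent.
allow_rate_pct = 100 * granted / resolved
print(round(allow_rate_pct))          # 88

# "With interview" figure: assuming the +10.9% lift is added in
# percentage points on top of the base rate (87.6 + 10.9 ≈ 98.5,
# consistent with the displayed 98%).
with_interview_pct = min(allow_rate_pct + 10.9, 100.0)
print(f"{with_interview_pct:.1f}")    # 98.5
```

If the lift were instead multiplicative (88% × 1.109 ≈ 97.6%), the displayed 98% would still round out the same, so the page alone does not settle which rule the tool uses.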

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 672 resolved cases.
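The "vs TC avg" deltas above are internally consistent: subtracting each delta from its rate recovers the same implied Tech Center baseline for every statute. A quick check of the printed numbers (the interpretation that all four deltas are measured against a single ~40% baseline is an inference from the arithmetic, not stated on the page):

```python
# Recover the Tech Center average implied by each statute row:
# implied baseline = statute rate minus the "vs TC avg" delta.
rows = {
    "101": (7.4, -32.6),
    "103": (68.4, 28.4),
    "102": (8.3, -31.7),
    "112": (3.5, -36.5),
}
for statute, (rate, delta) in rows.items():
    # Every row implies the same ~40.0% baseline.
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")
```

The §103 figure (68.4%, well above the implied baseline) is the relevant one here, since the current rejection is under §103.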

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments and Remarks

Applicant's arguments filed 1/21/26 have been fully considered as follows:

Applicant argues:

Applicant respectfully submits that claim 1 is patentable because each and every element of the claim is not disclosed or suggested by the combined references. In the rejection of claim 5, the Office Action asserts that Pekelny discloses, at [0051], "wherein the predicting the at least one user action in the real world subsequent to the occurrence of the event comprises: obtaining a correlation between the detected event and the at least one real world object associated with the event; and predicting the at least one user action in the real world based on the correlation."

Paragraph [0051] of Pekelny discloses:

More generally, FIG. 6 shows two examples of dynamic conditions that might trigger the presentation of alert information. In other cases, the user 104 can configure the SPC to show alert information in response to other kinds of actions performed by (or events associated with) the user 104, and/or in response to other kinds of actions performed by an object in the physical environment 104. For example, the user 104 can configure the SPC to only show alert information associated with walls when the user 104 is walking through the physical environment 102. In another case, the user 104 can configure the SPC to show alert information for other people only when those people are walking through the physical environment 102, and so on.
However, Applicant submits that Pekelny does not disclose or suggest "determining a correlation between the detected event and the at least one real-world object associated with the event; predicting at least one user action in the real world subsequent to the occurrence of the event based on a movement of the user in the real world for interacting with the at least one real-world object and the determined correlation between the detected event and the at least one real-world object; and generating an overlay of the at least one real-world object within an ongoing virtual session based on a position of at least one real-world object and a position of the user," as recited in claim 1. (Remarks, pages 10-11).

Applicant's argument is unpersuasive because the scope of "correlation" is not defined by the specification. In the same context, Pekelny considers rule-based correlation between detected events and objects in the scene (such as a user approaching a wall, a user entering the field of view, etc.). These features can additionally be combined (such as with the context of when the user is walking).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Use of indicates a limitation is not explicitly disclosed by the reference alone.

Claims 1-4, 9-14, and 19-20 are rejected under 35 U.S.C.
103 as being unpatentable over Pekelny (US 2020/0026922) in view of Xu (US 2020/0043210).

Claim 1

Pekelny discloses a method of a virtual reality device, the method comprising:

detecting the at least one real-world object (Pekelny, ¶ 2: "the technique can detect one or more objects-of-interest based on preconfigured setting information. The technique then uses a scene analysis component to automatically detect the presence of identified objects-of-interest in the physical environment, while the user interacts with the virtual environment.");

detecting an occurrence of an event associated with the at least one real-world object in proximity of a user in a real world (Pekelny, ¶ 84: "specify alert-condition information…the alert-condition information specifies that the user 104 wishes to be alerted…may depend on a dynamic event in the physical environment 102 (e.g., an action performed by the user 104 or the object-of-interest). Other alert-condition information may depend on a state in the physical environment 102, such on opened or closed state of a door, etc.");

determining a correlation between the detected event and the at least one real-world object associated with the event (e.g.
correlation between physical movement and the need for collision detection; ¶ 51: "For example, the user 104 can configure the SPC to only show alert information associated with walls when the user 104 is walking through the physical environment 102");

predicting at least one user action in the real world subsequent to the occurrence of the event based on a movement of a user in the real world for interacting with the at least one real-world object (Pekelny, ¶ 115: "Finally, the dynamic event detection component(s) 724 receives environment input information which describes a dynamic event that is occurring in the physical environment 102, such as video information that shows the person 108 performing a gesture, and/or controller input information that describes how the user 104 is currently interacting with one or more controllers. The dynamic event detection component(s) 724 determines whether this environment input information matches telltale information associated with known events") and the determined correlation between the detected event and the at least one real-world object (alerting based on context; e.g. approaching walls, approaching other users, a user entering a room, etc.; ¶ 10: "The VR device shows alert information which apprises the user of the existence of a person in the vicinity of the user at the current time, within the physical environment."); and

generating an overlay of the at least one real-world object within the ongoing virtual session (Pekelny, ¶ 44: "The SPC optionally displays virtual objects that have real-object counterparts in a special manner (e.g., with a glowing aura, etc.) to distinguish these objects from other parts of the virtual environment 202 (that do not have real-object counterparts)") based on a position of the user (Pekelny, ¶ 119: "For instance, a tracking component 1208 determines the position and orientation of the VR device 106 in the physical environment, with respect to a world coordinate space").
[Image: media_image1.png]

Pekelny does not explicitly disclose, but Xu makes obvious, a position of at least one real-world object (Xu, ¶ 15: "receiving the position information of the real object in the space sent by the positioning chip via ultra-wideband (UWB) technology in real time; the position information includes three-dimensional coordinates of the real object.").

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider position detection as claimed. One of ordinary skill in the art would have motivation to consider UWB in order to improve real object tracking and improve interactivity "[b]y controlling, according to the position information of the real object, a virtual object in an augmented reality (AR) scenario to move in the AR scenario according to a moving track of the real object, improving the user's interactive in the mixed reality combining the virtual and reality." (¶ 41). One of ordinary skill in the art would have had a reasonable expectation of success because Pekelny considers tracking of real objects, could benefit from additional detection means, and suggests application to other sensor means in addition to video.
Claim 2

Pekelny does not explicitly disclose, but Xu makes obvious, wherein the detecting the at least one real-world object comprises: transmitting a first ultra-wide band (UWB) signal in the proximity of the user; receiving a second UWB signal; determining a variation in the second UWB signal; and detecting the at least one real-world object based on the variation; wherein the second UWB signal is reflected from the at least one real-world object by the first UWB signal, and wherein the variation corresponds to a presence of at least one real-world object (Xu, ¶¶ 50-51: "In an embodiment, the ultra wideband (UWB) technology is a carrier-free communication technology using nanosecond to microsecond non-sinusoidal narrow pulses to transmit data, which is a revolutionary advancement in the radio field, and it is believed to be a mainstream technology for short-range wireless communication in the future. UWB was used in the early days for close-range high speed data transmission. In recent years, foreign countries have begun to use their sub-nanosecond ultra-narrow pulses for close-range accurate indoor positioning. In embodiments of the present application, the positioning chip may send (or broadcast) its own position information to a surrounding space by using the UWB technology, that is, send the position information of the real object in the space, and thus, the controller in the AR device, according to the obtained position information sent by the positioning chip using the UWB technology, determines the position information of the real object in the space. In an embodiment, the position information in the space is represented by three-dimensional coordinates. Therefore, the position information of the real object in the space obtained by the controller includes three-dimensional coordinates of the real object.").

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider UWB signals as claimed.
One of ordinary skill in the art would have motivation to consider UWB in order to improve real object tracking. One of ordinary skill in the art would have had a reasonable expectation of success because Pekelny considers tracking of real objects and could benefit from additional detection means.

Claim 3

Pekelny discloses wherein the detecting the at least one real-world object further comprises: determining a shape of the at least one real-world object based on the determined variation (Pekelny, ¶ 94: "In later stages, a convolutional component may apply a kernel that finds more complex shapes (such as shapes that resemble human noses, eyes, keyboards, etc.)."); determining at least one user parameter including at least one of a location, a time, or a previous activity (Pekelny, ¶ 70: "The video pass-through construction component can then determine the location at which the object-of-interest occurs in the physical environment 102 with respect to the user's current position"); and detecting the at least one real-world object in the real world based on the shape and the at least one user parameter (Pekelny, ¶ 70: "The video pass-through construction component can then determine the location at which the object-of-interest occurs in the physical environment 102 with respect to the user's current position").

Pekelny does not explicitly disclose application to UWB signals. Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider UWB signals as claimed. One of ordinary skill in the art would have motivation to consider UWB in order to improve real object tracking. One of ordinary skill in the art would have had a reasonable expectation of success because Pekelny considers tracking of real objects, could benefit from additional detection means, and suggests application to other sensor means in addition to video.
Claim 4

Pekelny discloses wherein the detecting the occurrence of the event comprises: determining a spatial transformation in the at least one real-world object based on the positional coordinates (Pekelny, ¶¶ 67, 119: "determine whether a prescribed event has taken place in the physical environment 102 (e.g., corresponding to telltale movement of an object-of-interest, or the user 104 himself, etc.)… For instance, a tracking component 1208 determines the position and orientation of the VR device 106 in the physical environment, with respect to a world coordinate space"); and detecting the occurrence of the event in the real world based on the spatial transformation (e.g. movement into proximity; ¶ 40: "[T]he SPC detects this person 108 and then presents alert information 204 which notifies the user 104 of the existence of the other person 108. In this case, the alert information 204 may include a visual representation of the surface of the other person's body."); wherein the spatial transformation corresponds to at least one of a change in shape or a change in the positional coordinate (e.g. people moving to within 2 meters (see Fig. 8)).

Pekelny does not explicitly disclose obtaining positional coordinates of the at least one real-world object based on the second UWB signal (Xu, ¶ 15: "receiving the position information of the real object in the space sent by the positioning chip via ultra-wideband (UWB) technology in real time; the position information includes three-dimensional coordinates of the real object.").

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider UWB signals as claimed. One of ordinary skill in the art would have motivation to consider UWB in order to improve real object tracking.
One of ordinary skill in the art would have had a reasonable expectation of success because Pekelny considers tracking of real objects, could benefit from additional detection means, and suggests application to other sensor means in addition to video.

Claim 9

Pekelny discloses further comprising: detecting at least one movement of the user in the real world for interacting with the at least one real-world object; and generating at least one virtual interaction in the ongoing virtual session based on the detected at least one movement of the user (e.g. interacting with a virtual ball or other game environment; ¶ 52: "For example, assume that the user is currently manipulating a handheld controller to simulate the swinging of a bat or tennis racket in the course of interacting with an immersive virtual game…the current virtual environment with which the user is interacting,").

Claim 10

Pekelny discloses comprising: determining a user context corresponding to at least one of a user current location, a time, or an environment in the real world, wherein a user is in the ongoing virtual session; and generating at least one virtual interaction in the ongoing virtual session based on the detected at least one movement of the user and the user context (e.g. context with the virtual environment; context relative to the physical location, etc.; ¶ 52: "In another example, the SPC can display alert information which depends on the manner in which the user is currently using one or more handheld (or body-worn) controllers. For example, assume that the user is currently manipulating a handheld controller to simulate the swinging of a bat or tennis racket in the course of interacting with an immersive virtual game.
The SPC can display alert information which depends on any combination of: the type of the controller that the user 104 is currently handling; the current position and/or orientation of the controller in the physical environment 102 (which can be detected by optical and/or magnetic signals emitted by the controller); the current movement of the controller (which can be detected by inertial sensors associated with the controller); the proximity of the controller to physical objects in the physical environment 102; the current virtual environment with which the user is interacting, and so on. The user may find the above-described alert information useful to avoid striking a physical object with the controller.")

Claim 11

The same teachings and rationales in claim 1 are applicable to claim 11, with Pekelny disclosing a virtual reality device comprising: at least one memory configured to store instructions; at least one processor configured to execute the instructions to: (Fig. 12).

Claim 12

The same teachings and rationales in claim 2 are applicable to claim 12.

Claim 13

The same teachings and rationales in claim 3 are applicable to claim 13.

Claim 14

The same teachings and rationales in claim 4 are applicable to claim 14.

Claim 19

The same teachings and rationales in claim 9 are applicable to claim 19.

Claim 20

The same teachings and rationales in claim 10 are applicable to claim 20.

Claims 6 and 16 are rejected under 35 U.S.C.
103 as being unpatentable over Pekelny (US 2020/0026922) in view of Xu (US 2020/0043210) and Harvey (US 2018/0173323).

Claim 6

Pekelny does not disclose, but Harvey discloses, further comprising: determining an action parameter indicating at least one of a duration of the at least one user action or a classification of the at least one user action based on the obtained correlation; and determining a privacy level of the at least one user action, wherein the privacy level corresponds to a display restriction level of the at least one user action and the at least one real-world object for at least one other user sharing the same ongoing virtual session with the user (¶ 224: "In some embodiments, a user profile may be configured for each user, which may include content display locations, control schemas, and the like. For example, a first user may be relatively short and each zone can be configured in a location that is ergonomically beneficial to that user's relative size. Zone 2 may be positioned closer to the reference peripheral to accommodate the user's corresponding biomechanical range. Likewise, a second user that is relatively tall may have a user profile that accommodates that user's relative size. In some cases, a user profile may arrange one or more zones for a particular environment, as described above. A first user may have a first profile for office use (e.g., large surface area with far walls) and a second profile for working on a bus (e.g., close quarters and downward facing viewing area). Thus, user preferences and/or the work environment (e.g., wall location, lighting, proximity of other people, etc.) may be used to establish different working profiles to better optimize the augmented workstation environment.
In some embodiments, aspects of AWE 1900 can detect a position/location of the user relative to reference peripheral 1830(1) and may identify zones for human ergonomic envelopes (e.g., head movement) that may have a different response to certain content. For instance, content may snap into ideal zones, or a wireframe outlines may be superimposed on an area/volume to highlight an ergonomically preferred configuration different from a current arrangement of zones. In some cases, zone placement may be user defined. For instance, a series of questions and/or displayed arrangements may be presented to a user to make decisions on how zones will be displayed, what content will be included in each zone, how zones can affect one another (e.g., whether or not content may be moved freely from one zone to the next), how to arrange zones in certain work environments, etc., enabling/disabling interactive controls based on applications (e.g., word processing may disable object manipulation controls, as one might use in a CAD tool). A user may determine how zones are shared between users and what privacy policies to apply. One of ordinary skill in the art with the benefit of this disclosure would appreciate the many variations, modifications, and alternative embodiments.")

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider privacy context as claimed. One of ordinary skill in the art would have motivation to adjust overlay content to allow user interaction subject to privacy preferences. One of ordinary skill in the art would have had a reasonable expectation of success because Pekelny considers overlay of a virtual depiction of a real item in context.

Claim 16

The same teachings and rationales in claim 6 are applicable to claim 16.

Claims 7 and 17 are rejected under 35 U.S.C.
103 as being unpatentable over Pekelny (US 2020/0026922) in view of Xu (US 2020/0043210) and Li (US 2017/0061696).

Claim 7

Pekelny does not disclose, but Li discloses, wherein the generating the overlay of the at least one real-world object within the ongoing virtual session comprises: obtaining spatial coordinates of the at least one real-world object corresponding to vertices of the at least one real-world object in the real world; obtaining a characteristic feature of the ongoing virtual session corresponding to a digital environment of the ongoing virtual session; associating the spatial coordinate with the characteristic feature; and generating the overlay of the at least one real-world object within the ongoing virtual session based on the association (¶ 97: "In an exemplary embodiment, the virtual reality display apparatus 200 may detect a feature point in the captured image, compare the detected feature point with a pre-stored feature point of the keyboard image, and detect the physical keyboard image. For example, coordinates of four corners of the physical keyboard may be determined according to the pre-stored feature point of the physical keyboard image and a coordinate of a feature point in the captured image matching a coordinate of the pre-stored feature point of the physical keyboard image. Subsequently, an outline of the physical keyboard may be determined according to the coordinates of the four corners in the captured image. As a result, the virtual reality display apparatus 200 may determine a keyboard image in the captured image. Here, the feature point may be a scale-invariant feature transform (SIFT) or another feature point. Accordingly, a coordinate of a point of an outline of any object (that is, a point on an outline of an object) in the captured image may be calculated in the same or similar method.
Furthermore, it should be understood that the keyboard image may be detected from the captured image in another method.")

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider vertices. One of ordinary skill in the art would have motivation to match virtual content to the real object in the scene. One of ordinary skill in the art would have had a reasonable expectation of success because Pekelny considers overlay of a virtual depiction of a real item in context.

Claim 17

The same teachings and rationales in claim 7 are applicable to claim 17.

Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pekelny (US 2020/0026922) in view of Xu (US 2020/0043210) and Krol (US 2024/0040086).

Claim 8

Pekelny does not disclose, but Krol discloses, wherein the generating the overlay of the at least one real-world object (¶ 45: "Interface 100 includes avatars 102A and B, which each represent different participants to the videoconference. Avatars 102A and B, respectively, are representations of participants to the videoconference. The representation may be a two-dimensional or three-dimensional model. The two- or three-dimensional model may have texture mapped video streams 104A and B from devices of the first and second participant. A texture map is an image applied (mapped) to the surface of a shape or polygon") within the ongoing virtual session comprises obtaining a scaling factor regarding size of the overlay of the at least one real-world object and position coordinates of the at least one real-world object; and generating the overlay of the at least one real-world object within the ongoing virtual session based on the scaling factor and position coordinates (Krol, ¶ 12: "First, the method proceeds by evaluating whether a position, rotation or scale of an object of represented by the respective node in the tree hierarchy needs to be updated.
When the position, rotation and scale of the object needs to be updated, the method then transforms the object. The method continues by determining whether the object is labeled as fixed. When determining whether the object is not labeled as fixed, the evaluating and transforming is repeated for children of the respective node. The method concludes when determining whether the object is labeled as fixed and the position, rotation and scale of the object does not need to be updated by halting the evaluating and transforming for children of the respective node.")

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider scale as claimed. One of ordinary skill in the art would have motivation to adjust overlay content to match the size of the real scene. One of ordinary skill in the art would have had a reasonable expectation of success because Pekelny considers overlay of a virtual depiction of a real item in context.

Claim 18

The same teachings and rationales in claim 8 are applicable to claim 18.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY whose telephone number is (571) 272-4582. The examiner can normally be reached Monday through Friday, 9:00 am-5:30 pm (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN M GRAY/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Jan 25, 2024: Application Filed
Oct 16, 2025: Non-Final Rejection — §103
Jan 21, 2026: Response Filed
Feb 04, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597216: ARTIFICIAL INTELLIGENCE VIRTUAL MAKEUP METHOD AND DEVICE USING MULTI-ANGLE IMAGE RECOGNITION. Granted Apr 07, 2026 (2y 5m to grant).
Patent 12586252: METHOD FOR ENCODING THREE-DIMENSIONAL VOLUMETRIC DATA. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12572892: SYSTEMS AND METHODS FOR VISUALIZATION OF UTILITY LINES. Granted Mar 10, 2026 (2y 5m to grant).
Patent 12561928: SYSTEMS AND METHODS FOR CALCULATING OPTICAL MEASUREMENTS AND RENDERING RESULTS. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12542946: REMOTE PRESENTATION WITH AUGMENTED REALITY CONTENT SYNCHRONIZED WITH SEPARATELY DISPLAYED VIDEO CONTENT. Granted Feb 03, 2026 (2y 5m to grant).
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview: 98% (+10.9%)
Median Time to Grant: 2y 2m
PTA Risk: Moderate
Based on 672 resolved cases by this examiner. Grant probability derived from career allow rate.
