Prosecution Insights
Last updated: April 19, 2026
Application No. 18/942,752

IMMERSIVE SYSTEM AND DISPLAYING METHOD

Final Rejection §103
Filed: Nov 10, 2024
Examiner: ELAHI, TOWFIQ
Art Unit: 2625
Tech Center: 2600 — Communications
Assignee: HTC Corporation
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 79%, above average (565 granted / 714 resolved; +17.1% vs TC avg)
Interview Lift: +15.2% among resolved cases with interview (strong)
Avg Prosecution: 2y 7m typical timeline (24 currently pending)
Total Applications: 738 career history, across all art units
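
For clarity on what these headline figures mean, the sketch below reproduces them from per-case records: the career allow rate is grants divided by resolved cases, and the interview lift is the allow-rate gap between cases with and without an examiner interview. The Case layout and the sample counts are invented for illustration; this is a minimal sketch, not the analytics vendor's schema or model.

```python
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool        # did the case resolve in a grant?
    had_interview: bool  # was an examiner interview held?

def summarize(cases):
    # Career allow rate: share of resolved cases that granted.
    allow_rate = sum(c.granted for c in cases) / len(cases)
    # Interview lift: allow-rate gap between the interviewed and
    # non-interviewed cohorts.
    iv = [c for c in cases if c.had_interview]
    no_iv = [c for c in cases if not c.had_interview]
    lift = (sum(c.granted for c in iv) / len(iv)
            - sum(c.granted for c in no_iv) / len(no_iv))
    return allow_rate, lift

# Invented sample: 100 interviewed cases (90 granted), 100 without (75 granted).
cases = ([Case(True, True)] * 90 + [Case(False, True)] * 10
         + [Case(True, False)] * 75 + [Case(False, False)] * 25)
allow_rate, lift = summarize(cases)
print(f"career allow rate: {allow_rate:.1%}, interview lift: {lift:+.1%}")
# -> career allow rate: 82.5%, interview lift: +15.0%
```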

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§103: 60.7% (+20.7% vs TC avg)
§102: 19.1% (-20.9% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 714 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 and 11-19 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (US 20200379576) in view of Shutzberg (EP 4471559).

Regarding claim 1, Chen teaches an immersive system, comprising: a tracking device configured to generate pose data about the tracking device ([0021]: VR and/or AR systems include a hand tracker that can track a user's hands; the hand tracker may track one or both of the user's hands, may be a component of the HMD or separate from it, and in some implementations performs key point detection to track one or more points of a user's hand, such as finger tips, knuckles, etc.); and a head-mounted display device comprising a displayer, a communication circuit and a processing circuit (fig. 1), the displayer configured to display an immersive content ([0013]: fig. 4 is a third-person view of a physical space in which a user is experiencing an immersive environment through the HMD of fig. 1; [0067]: immersive environment 402 and visual entities 408), the communication circuit configured to establish a wireless connection to the tracking device, and the processing circuit coupled to the displayer and the communication circuit (fig. 10, [0080]; also see fig. 1), the processing circuit configured to: in response to the wireless connection being established between the tracking device and the head-mounted display device (figs. 1 and 10, [0080]), compute an appropriate hand position in front of the head-mounted display device [[without referring to the pose data]] ([0068]); render a virtual model in the immersive content corresponding to the tracking device based on the appropriate hand position prior to stabilization of the pose data received ([0068]) from the tracking device (fig. 1, 128); and correct or determine a position and an orientation of the virtual model in the immersive content based on the pose data received from the tracking device in response to the pose data having stabilized ([0020], [0033]; also see fig. 2).

Chen is silent on computing an appropriate hand position in front of the head-mounted display device without referring to the pose data. However, Shutzberg teaches computing an appropriate hand position (fig. 50B: in some implementations, a processor performs a method by executing instructions stored on a computer-readable medium; the method involves obtaining hand data associated with a position of a hand in a 3D space, and the hand data may be obtained based on first sensor data, e.g., using outward-facing image sensors on an HMD) without referring to the pose data (in the above example, the application 5040 may be provided with additional information; for example, the application 5040 may receive information about the location of the pinching hand, e.g., a manipulator pose; such hand information may be higher level than the raw hands data; for example, the application 5040 may receive a manipulator pose that identifies the position and/or orientation of the hand within 3D space without receiving information about the hand's configuration and/or information about a 3D model (e.g., of joints), i.e., indicative of hand pose, used to represent the hand's positions, pose, and/or configuration in 3D space).

Therefore, it would have been obvious to one of ordinary skill in the art to combine Chen with Shutzberg's teaching so as to compute an appropriate hand position in front of the head-mounted display device without referring to the pose data. The motivation is to provide devices, systems, and methods that interpret user activity as user interactions with user interface (UI) elements positioned within a three-dimensional (3D) space such as an extended reality (XR) environment.

Regarding claim 2, Chen teaches wherein the head-mounted display device comprises a camera (fig. 3A, 380), the camera is configured to capture an image in front of the head-mounted display device, and the processing circuit is configured to execute a computer vision algorithm based on the image to track the appropriate hand position ([0061]).

Regarding claim 3, Chen teaches wherein the processing circuit is configured to execute the computer vision algorithm based on the image to generate at least one hand position about at least one hand relative to the head-mounted display device, and the processing circuit is configured to compare the at least one hand position with predefined criteria and determine an appropriate hand position on which to display the virtual model ([0061]; fig. 4; [0068]: in fig. 4 the user U is experiencing the immersive environment 402 through the HMD 104; in fig. 5 the user U continues to interact with an immersive painting environment, but is now holding the handheld electronic device 302 up in front of the user; the position of the handheld electronic device 302 causes the computing device 102 to select a ray-based intersection mode as the interaction mode; in this example, multiple hand properties are determined in the ray-based intersection mode; first, the position of the user's right hand (RH) is determined and mapped, i.e., predefined criteria, to a virtual hand position in the immersive environment 402; additionally, because the ray-based intersection mode has been selected, a pointing direction of a finger of the user's right hand is determined; a virtual ray 502 is generated within the immersive environment 402 extending in the determined pointing direction from the virtual hand position; as the user's hand moves in the physical world, the virtual ray 502 moves in the immersive environment 402, allowing the user to point to and interact with various user interface elements of a user interface entity 504; in some implementations, the user interface entity 504 is a virtual entity that is displayed in the immersive environment 402 based on a determined interaction mode, when the user performs a specific action (e.g., holding up the handheld electronic device 302), or when the user orients toward a specific position in the immersive environment 402).

Regarding claim 4, Chen teaches wherein the predefined criteria comprise a proximity distance or a positional range to determine the appropriate hand position ([0113]).

Regarding claim 5, Chen teaches wherein the tracking device comprises an inertial measurement unit (fig. 1, IMU 132), a motion sensor, an optical tracking sensor, a gyroscope, an accelerometer or a magnetometer for generating the pose data ([0041]).

Regarding claim 6, Chen teaches wherein the tracking device is a hand-held controller, a wearable tracker or a tracking attachment (fig. 1).

Regarding claim 7, Chen teaches wherein whether the pose data has stabilized is determined by the processing circuit according to a variation of the pose data received from the tracking device ([0024]).

Regarding claim 8, Chen teaches wherein whether the pose data has stabilized ([0024]) is determined by the processing circuit according to whether a predetermined time length has expired since the wireless connection was established ([0036]).

Regarding claim 9, Chen teaches wherein the virtual model represents a user interface element that is displayed in response to user interaction ([0031], [0068]).

Regarding claim 11, Chen in view of Shutzberg teaches a displaying method, suitable for displaying a virtual model, the displaying method (fig. 2) comprising: establishing a wireless connection ([0036]) between a tracking device and a head-mounted display device (fig. 1); transmitting pose data generated by the tracking device to the head-mounted display device ([0021], [0023], fig. 4) via the wireless connection (fig. 1); computing an appropriate hand position in front of the head-mounted display device without referring to the pose data ([0068]); rendering a virtual model in an immersive content corresponding to the tracking device based on the appropriate hand position prior to stabilization of the pose data received (Shutzberg: fig. 6, 5505 and 5515) from the tracking device (fig. 1, 128); and correcting or determining a position and an orientation of the virtual model based on the pose data received from the tracking device in response to the pose data having stabilized ([0020], [0033]; also see fig. 2).

Regarding claim 12, Chen teaches capturing an image by a camera (fig. 3A, 380) of the head-mounted display device, and executing a computer vision algorithm based on the image by a processing circuit of the head-mounted display device to track the appropriate hand position ([0061]).

Regarding claims 13-19, the limitations parallel those of claims 3-9, respectively, and are rejected in the same way.

Allowable Subject Matter

Claims 10 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments filed 12/22/2025 have been fully considered, but they are not persuasive.

Applicant argues that the secondary reference Shutzberg (EP 4471559) does not qualify as prior art because its publication date does not predate the provisional application's priority date.

Examiner responds: The Examiner respectfully disagrees. The Shutzberg reference was submitted to teach "compute an appropriate hand position in front of the head-mounted display device without referring to the pose data." This limitation constitutes a negative limitation (i.e., "without referring to the pose data"), stating what the invention does not do rather than what it does, and as such, according to the MPEP, must have explicit support in the original specification. See MPEP 2173.05(i). The Examiner submits that the provisional application's specification does not provide explicit support for it. Therefore, the provisional application's priority date is not valid for these limitations, and the secondary reference Shutzberg (EP 4471559) qualifies as prior art.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Silkin (US 2017003738).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOWFIQ ELAHI, whose telephone number is (571) 270-1687. The examiner can normally be reached M-F, 10AM-3PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Boddie, can be reached at (571) 272-0666. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TOWFIQ ELAHI/
Primary Examiner, Art Unit 2625
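
To make the disputed claim language concrete, here is a minimal sketch of the two-phase display flow recited in independent claims 1 and 11 as the Office action characterizes it: render the virtual model at a computer-vision hand estimate while the controller's pose data is still settling, then correct the model from the pose data once it has stabilized (the variation test of claim 7). All function names, thresholds, and values are hypothetical; this illustrates the claimed technique, not HTC's, Chen's, or Shutzberg's implementation.

```python
import random

JITTER_THRESHOLD = 0.01  # meters; hypothetical bound on positional variation

def estimate_hand_position_cv(frame):
    # Stand-in for the computer-vision hand tracker of claim 2: estimates a
    # hand position in front of the HMD without consulting controller pose data.
    return (0.0, -0.2, -0.4)  # a fixed "hand in front of the headset" guess

def pose_has_stabilized(recent_positions, threshold=JITTER_THRESHOLD):
    # Claim 7 flavor: pose data counts as stabilized once the positional
    # spread over a short window drops below a threshold.
    if len(recent_positions) < 5:
        return False
    spread = max(max(axis) - min(axis) for axis in zip(*recent_positions))
    return spread < threshold

def display_loop(controller_positions, camera_frames):
    window = []
    for frame, pos in zip(camera_frames, controller_positions):
        window = (window + [pos])[-5:]
        if pose_has_stabilized(window):
            # Phase 2: correct/determine the virtual model from the
            # tracking-device pose data once it has stabilized.
            model_pos, source = pos, "tracking-device pose data"
        else:
            # Phase 1: render at the CV hand estimate, ignoring the
            # not-yet-stable pose data (the disputed "without referring
            # to the pose data" limitation).
            model_pos, source = estimate_hand_position_cv(frame), "CV hand estimate"
        print(f"render virtual model at {model_pos} ({source})")

# Simulated session: noisy positions right after the wireless connection is
# established, then a settled controller position.
noisy = [(random.uniform(-1, 1), random.uniform(-1, 1), -0.5) for _ in range(6)]
settled = [(0.10, -0.25, -0.45)] * 6
display_loop(noisy + settled, camera_frames=[None] * 12)
```

The time-based alternative of claim 8 would replace pose_has_stabilized with a check that a predetermined time has elapsed since the wireless connection was established.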

Prosecution Timeline

Nov 10, 2024: Application Filed
Sep 24, 2025: Non-Final Rejection — §103
Dec 22, 2025: Response Filed
Mar 06, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602136: DISPLAY DEVICE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12603037: DISPLAY DEVICE AND METHOD OF DRIVING THE SAME
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12579925: METHOD AND SYSTEM FOR TRANSMITTING DATA, TIMING CONTROLLER, AND SOURCE DRIVER CHIP
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572029: AIR FLOATING VIDEO DISPLAY APPARATUS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572205: USER INTERFACE DEVICE FOR ROBOTS, ROBOT SYSTEMS AND RELATED METHODS
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 94% (+15.2%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 714 resolved cases by this examiner. Grant probability derived from career allow rate.
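
The footnote states that the grant probability is derived from the career allow rate; the arithmetic below reproduces the headline numbers under that assumption. This is a sketch of the apparent calculation, not the vendor's actual model, which may weight or cap differently.

```python
granted, resolved = 565, 714   # career counts shown above
interview_lift = 0.152         # observed lift for resolved cases with interview

base = granted / resolved                         # 0.791 -> "79% Grant Probability"
with_interview = min(base + interview_lift, 1.0)  # 0.943 -> "94% With Interview"

print(f"base: {base:.0%}, with interview: {with_interview:.0%}")
# -> base: 79%, with interview: 94%
```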
