DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 2, 9 and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 recites the limitation "the shorter distance" in line 7. There is insufficient antecedent basis for this limitation in the claim. Claims 9 and 16 recite similar limitations and are rejected on the same basis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-11, 13-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Smith (US 2018/0173404 A1) in view of Osman et al. (US 2018/0311583 A1).
Regarding claim 1, Smith teaches:
A method comprising:
obtaining real-time video captured using one or more imaging sensors of an immersive headset; ([0054], “The technique 400 involves capturing live video of the real world environment using a video camera on a head-mounted display, as shown in block 401.”)
processing the real-time video to identify one or more real-world objects and render the one or more real-world objects on at least one display of the immersive headset; ([0055], “Technique 400 further involves identifying real world objects in individual frames of the live video that are in a subset of real world objects selected by the user, as shown in block 402. In one example, an algorithm is executed to analyze each frame of the live view to identify objects within each frame. Such an algorithm can use information about each of the objects in the subset. For example, based on a user identifying that a chair should be included in the user experience, the algorithm can perform image detection to identify whether the live view has any objects that match the characteristics (e.g., size, shape, etc.) of the chair selected by the user. In one embodiment, the object detection involves the use of a 3D model of the real world environment and/or the real world objects in the environment. This allows the real world objects in the live view to be identified from different perspectives. For example, a 3D model of a chair can be created based on images of the chair captured by the user viewing the chair using a HDM to view and select the chair from different viewing directions, using an HDM that has multiple cameras, or using a 3D model generation technique that determines a model of an object from a single image or viewing direction. The 3D model of the chair or other object can then be used to identify the chair in the live view frames and ensure that the chair is not replaced with virtual reality content.”)
allowing a user of the immersive headset to select at least one of the one or more real-world objects or at least one spatial volume containing the at least one real-world object; ([0055], quoted above regarding the identifying step; in particular, "based on a user identifying that a chair should be included in the user experience, the algorithm can perform image detection to identify whether the live view has any objects that match the characteristics (e.g., size, shape, etc.) of the chair selected by the user" and "a 3D model of a chair can be created based on images of the chair captured by the user viewing the chair using a HDM to view and select the chair from different viewing directions") and
displaying an extended reality view on the at least one display while overlaying, on the extended reality view, a representation of at least a portion of the at least one real-world object or the at least one spatial volume ([0065], “FIG. 8 depicts alternative user experiences based on the selection of real world objects using the selection view of FIGS. 5-7. In the first user experience 801, the user-selected real world objects, including the desk 702, monitors 703, 704, laptop 705, phone 706, keyboard 707, mouse 708, printer 709, shelf 710, trash can 711, and chair 712, are visible. However, the other portions of the live view are replaced with virtual reality content 803.”)
However, Smith does not teach the following limitation, which Osman teaches:
Displaying … a representation of at least a portion of the at least one real-world object or the at least one spatial volume in one of multiple modes that each show the representation differently. ([0138], “In one embodiment, the transition from non-transparent mode to semi-transparent mode may be initiated at the HMD to provide a safe zone to a user for watching/interact with content rendered on the display screen of the HMD, including game scenes from gameplay. For example, a user who is fully immersed in gameplay of a video game may be moving around in a room while still engaged in the video gameplay. As a result, the user may get closer to an object, such as a wall, a lamp, a table, a chair, a sofa, a bed, a cord, a pet, a person, etc., in the room. In order to prevent the user from bumping into the object, from being tangled in a wire/cord of the HMD, from getting hurt, or from causing damage to the object, the game processing module in the HMD detects the user's proximity to the object. In response to the user's movement and actions the system may initiate a transition of the display screen of the HMD from non-transparent to semi-transparent mode, e.g., when the user is coming close to an object. In the semi-transparent mode, the game processing module blends or brings in the object from the real-world into the game scene that is being rendered on the display screen of the HMD, to indicate to the user the presence and proximity of the object in the direction of the user's movement. In one example, the size and proximity of the object that is rendered in the display screen may be proportional to the relative proximity of the real-world object to the user. In one embodiment, the real-world object may be rendered in an outline form represented by dotted lines, as illustrated in FIG. 4C. The outline form may be a gray-out form, a broken line form, an outline form, a ghost form, a semi-transparent form or a fully viewable presentation. In one embodiment, in addition to the transition, the game processing module may issue a warning sign to the HMD user when the user moves too close to an object in the room. The warning sign may be in audio format, haptic format, visual format, or any combinations thereof. The game processing module, thus, may be used for safety purposes wherein the system allows a user to have enriching viewing experience while preserving the safety of the user and of the objects in the immediate surrounding environment of the user. In one embodiment, the camera that is used to capture the real-world object may be a depth sensing camera, a regular RGB camera (providing the three basic color components—red, blue, green, on three different wires), or an ultrasonic sensor that can identify proximity, depth and distance of an object from a user, or any combinations thereof.”)
Smith teaches allowing users to select and emphasize physical objects from the real-world scene to be displayed in an immersive environment. Osman further teaches that the display style of the physical objects depends on their distance to the user.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Smith with the display method of Osman, in order to provide accurate distance information that helps users avoid obstacles in the immersive environment.
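To make the combined teaching concrete, the following minimal sketch illustrates proximity-driven blending of a real-world object into an immersive scene, in the manner Osman [0138] describes. It is illustrative only; the function names, units, and thresholds are hypothetical and appear in neither reference.

```python
# Minimal sketch (not from Smith or Osman): blend a detected real-world
# object into the rendered scene with opacity proportional to proximity,
# as Osman [0138] describes. Thresholds and names are hypothetical.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str
    distance_m: float  # distance from the headset, e.g. from a depth sensor

def overlay_alpha(obj: TrackedObject, near_m: float = 1.0, far_m: float = 3.0) -> float:
    """Opacity in [0, 1]: invisible beyond far_m, fully opaque inside near_m."""
    if obj.distance_m >= far_m:
        return 0.0
    if obj.distance_m <= near_m:
        return 1.0
    # Linear ramp between the thresholds, a design choice rather than
    # something either reference specifies.
    return (far_m - obj.distance_m) / (far_m - near_m)

# A chair 1.8 m away would be blended in at 60% opacity.
print(overlay_alpha(TrackedObject("chair", 1.8)))  # 0.6
```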
Regarding claim 2, Smith in view of Osman teaches:
The method of Claim 1, wherein the multiple modes include:
a safety mode in which, when one of the one or more real-world objects is within a first distance from the immersive headset, the representation includes an entirety of the one of the one or more real-world objects; (Osman [0138], quoted above regarding claim 1: when the user comes close to an object, the HMD transitions from non-transparent to semi-transparent mode and blends the object from the real world into the rendered scene, and “the size and proximity of the object that is rendered in the display screen may be proportional to the relative proximity of the real-world object to the user.”)
a moderate mode in which, when the one of the one or more real-world objects is within a second distance from the immersive headset, the representation includes only an outline of the one of the one or more real-world objects, the first distance shorter than the shorter distance; (Osman [0138], “In one embodiment, the real-world object may be rendered in an outline form represented by dotted lines, as illustrated in FIG. 4C. The outline form may be a gray-out form, a broken line form, an outline form, a ghost form, a semi-transparent form or a fully viewable presentation”) and
a balanced mode in which, when the one of the one or more real-world objects is within the second distance from the immersive headset, the representation includes the entirety of the one of the one or more real-world objects. (Osman [0138], quoted above regarding the safety mode. Osman teaches presenting the real-world objects in different styles based on the distance of the objects to the user; the objects can be displayed in their entirety and/or in outline form. Although Osman does not explicitly teach the first and second distances, it would have been an obvious design choice for a person of ordinary skill in the art to specify the distances and associate each with a display style, giving users a clear indication of how far away the real-world objects are and helping them navigate the immersive environment.)
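As an editorial illustration of how the three recited modes relate to the two distance thresholds, the following sketch encodes the claim 2 logic directly. The mode names come from the claim; the threshold values and all identifiers are hypothetical, since neither reference specifies them.

```python
# Minimal sketch (hypothetical, not from the references): the safety,
# moderate, and balanced modes of claim 2, keyed to two distances.
from enum import Enum

class Mode(Enum):
    SAFETY = "safety"
    MODERATE = "moderate"
    BALANCED = "balanced"

def representation(mode: Mode, distance_m: float,
                   first_m: float = 1.0, second_m: float = 2.5) -> str:
    """How a real-world object is drawn under each claimed mode."""
    if mode is Mode.SAFETY and distance_m <= first_m:
        return "entire object"
    if mode is Mode.MODERATE and distance_m <= second_m:
        return "outline only"
    if mode is Mode.BALANCED and distance_m <= second_m:
        return "entire object"
    return "not shown"

print(representation(Mode.MODERATE, 1.8))  # outline only
```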
Regarding claim 3, Smith in view of Osman teaches:
The method of Claim 1, wherein the multiple modes include: a safety mode in which the representation includes an entirety of one of the one or more real-world objects; (Osman [0138], quoted above regarding claims 1 and 2: the HMD transitions from non-transparent to semi-transparent mode and blends the object from the real world into the game scene that is being rendered on the display screen.)
a moderate mode in which the representation includes only an outline of the one of the one or more real-world objects, wherein the one of the one or more real-world objects is within a specified distance from the immersive headset; (Osman [0138], quoted above regarding claim 2: the real-world object may be rendered in an outline form represented by dotted lines, as illustrated in FIG. 4C.) and
a balanced mode in which the representation includes the entirety of the one of the one or more real-world objects, wherein the one of the one or more real-world objects is within the specified distance from the immersive headset. (Osman [0138], quoted above. Osman teaches presenting the real-world objects in different styles based on the distance of the objects to the user; the objects can be displayed in their entirety and/or in outline form. Although Osman does not explicitly define a specific distance, it would have been an obvious design choice for a person of ordinary skill in the art to specify the distance and associate it with a display style, giving users a clear indication of how far away the real-world objects are and helping them navigate the immersive environment.)
Regarding claim 4, Smith in view of Osman teaches:
The method of Claim 1, further comprising: providing a user interface that allows the user to select the one of the multiple modes. (Osman [0137], “FIG. 4C illustrates the view rendered on the screen as the user 108 continues to walk forward. The game objects (i.e., users) in the interactive scene have been updated to render the objects to correlate with the user's movement. In this embodiment, the interactive scene rendered in the HMD screen shows only one user that is in the related game space (for e.g., user C) that correlates with the distance moved by the user 108. Additionally, the user's continued gaze shift downwards for a period greater than a pre-defined threshold period, will cause the game processing module to bring into focus at least some part of the real-world objects, such as table lamp, table, game console, etc., captured by the external camera. The distance, location, angle, etc., of objects in the game space and the objects from the real world are rendered on the screen of the HMD by taking into consideration the user's forward movement and downward gaze shift as captured by the observation cameras and the external cameras. Thus, the objects (both real and virtual objects) are rendered in a manner that makes the objects appear closer to the user 108. In the example illustrated in FIG. 4C, the HMD screen is considered to have transitioned from non-transparent mode to semi-transparent mode. In some embodiments, the virtual game objects are faded out to the background and the real-world objects are brought into focus (i.e., foreground). In an alternate embodiment, the real-world objects are faded out to the background and the virtual game objects are brought in focus (i.e., foreground).” The combination of claim 1 is incorporated here.)
Regarding claim 6, Smith in view of Osman teaches:
The method of Claim 1, wherein processing the real-time video to identify the one or more real-world objects comprises at least one of: enabling the user to manually place at least one volume shape over at least one of the one or more real-world objects; suggesting at least one volume shape for at least one of the one or more real-world objects to the user; or automatically placing at least one volume shape over at least one of the one or more real-world objects. (Smith, FIG. 6, [0063], “FIG. 6 depicts tracking of user finger movements in the selection view 500 of the real world environment of FIG. 5 to identify the boundaries of real world objects. For example, the user could hold an extended finger 601 of a hand 602 still in a particular position for at least a predetermined amount of time, e.g., 3 seconds. The HDM or other device detects the hand 602 and the extended finger 601 being held steady for more than the threshold amount of time. Based on this, the device begins to track the movement of the extended finger 601 until a condition is satisfied. For example, the device can track the finger until a movement path 603 formed by the extended finger with respect to the real world environment in the live view reconnects with itself, i.e., to complete an enclosure.”)
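Smith's FIG. 6 technique of tracking an extended finger until its movement path "reconnects with itself" to complete an enclosure can be illustrated with a short sketch. The closure tolerance, sample counts, and all names below are hypothetical editorial choices, not Smith's implementation.

```python
# Minimal sketch (hypothetical): detect that a tracked fingertip path has
# looped back near its starting point, completing an enclosure as in
# Smith FIG. 6 / [0063]. Tolerance and sample counts are made up.
import math

def path_is_closed(points: list[tuple[float, float]],
                   tolerance: float = 0.02, min_points: int = 20) -> bool:
    """True once the path's latest sample lands near its first sample."""
    if len(points) < min_points:
        return False
    x0, y0 = points[0]
    x1, y1 = points[-1]
    return math.hypot(x1 - x0, y1 - y0) <= tolerance

# A roughly circular path whose last sample returns to the start.
circle = [(0.1 * math.cos(2 * math.pi * i / 30),
           0.1 * math.sin(2 * math.pi * i / 30)) for i in range(31)]
print(path_is_closed(circle))  # True
```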
Regarding claim 7, Smith in view of Osman teaches:
The method of Claim 1, wherein processing the real-time video to identify the one or more real-world objects comprises: providing a user interface that allows the user to modify at least one volume shape placed over at least one of the one or more real-world objects by at least one of translation, rotation, or scaling of the at least one volume shape. (Smith [0024], “Similarly, the user may control the relative position of real world object to aspects of the virtual environment. For example, the user could provide input to change the relative location of the real world desk and other objects to the ocean, e.g., preferring to be 10 feet from the water's edge rather than 50 feet from the water's edge.”)
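Claim 7's modification of a volume shape by translation, rotation, or scaling corresponds to applying a standard affine transform to the shape's vertices. The following 2D sketch is an editorial illustration only; Smith describes the user interaction, not this math, and every name here is hypothetical.

```python
# Minimal sketch (hypothetical): scale, rotate, then translate one vertex
# of a volume shape, the three operations recited in claim 7.
import math

def transform_vertex(x: float, y: float,
                     dx: float = 0.0, dy: float = 0.0,
                     angle_rad: float = 0.0,
                     scale: float = 1.0) -> tuple[float, float]:
    x, y = x * scale, y * scale                      # scaling
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    x, y = x * c - y * s, x * s + y * c              # rotation
    return x + dx, y + dy                            # translation

# Rotate the corner (1, 0) by 90 degrees, then shift it up by 2.
print(transform_vertex(1.0, 0.0, dy=2.0, angle_rad=math.pi / 2))
# approximately (0.0, 3.0)
```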
Regarding claim 8, Smith in view of Osman teaches:
An electronic device comprising: one or more imaging sensors; (Smith [0029], “an HMD is configured with a camera and microphone to capture images and sounds of real world objects and provide a user experience that combines a selection of the real world objects with virtual content.”) at least one display; and at least one processing device (Smith, FIG. 10) configured to: the remainder of claim 8 recites limitations similar to those of claim 1 and is rejected accordingly.
Claims 9-11 and 13-14 recite limitations similar to those of claims 2-4 and 6-7, respectively, and are rejected accordingly.
Regarding claim 15, Smith in view of Osman teaches:
A non-transitory machine readable medium containing instructions that when executed cause at least one processor of an electronic device to (Smith [0078], “The memory 102 and storage 103 can include any suitable non-transitory computer-readable medium. The computer-readable medium can include any electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions or other program code.”) The remainder of claim 15 recites limitations similar to those of claim 1 and is rejected accordingly.
Claims 16-17 and 19-20 recite limitations similar to those of claims 2-3 and 6-7, respectively, and are rejected accordingly.
Claims 5, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Smith in view of Osman, and further in view of Anwar (US 2002/0011990 A1).
Regarding claim 5, Smith in view of Osman teaches:
The method of Claim 4,
However, Smith in view of Osman does not teach the following limitation, which Anwar teaches:
wherein the user interface comprises a slider that allows the user to adjust a degree of displaying the at least one selected real-world object or the at least one selected spatial volume. (Anwar [0052], “In operation, the slider control 104 can allow the user to adjust the transparency, or alpha figure, of the objects that make up the document 100”)
Smith in view of Osman teaches displaying objects with different degrees of transparency. Anwar further teaches allowing users to control the degree of that transparency.
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have combined the teachings of Smith in view of Osman with the specific teachings of Anwar to provide a better user-machine interface in the method.
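Anwar's slider control maps a user-interface position to an alpha (transparency) value. A minimal sketch of that mapping, with hypothetical names and range, follows.

```python
# Minimal sketch (hypothetical): map a slider position to an overlay
# alpha in [0, 1], in the spirit of Anwar's slider control [0052].
def slider_to_alpha(slider_pos: int, slider_max: int = 100) -> float:
    clamped = max(0, min(slider_pos, slider_max))  # keep within the track
    return clamped / slider_max

# A slider at 75 of 100 yields 75% opacity for the selected overlay.
print(slider_to_alpha(75))  # 0.75
```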
Claims 12 and 18 recite limitations similar to those of claim 5 and are rejected accordingly.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YANNA WU whose telephone number is (571)270-0725. The examiner can normally be reached Monday-Thursday 8:00-5:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YANNA WU/Primary Examiner, Art Unit 2615