DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 9 January 2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1, 9, and 10 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1)
Regarding claim 1, Michihata teaches,
An information processing device (¶201,63-64, Fig. 2 and 17, “control device 9c” depicted in fig. 17 similar to “control device 9” depicted in fig. 2) comprising:
one or more memory (¶87 and fig. 2, “storage unit 97” depicted in fig. 2) devices configured to store instructions; (¶87 and fig. 2, storage unit 97 “stores therein a program to be executed by the control unit 94”) and
one or more processors, (¶87 and fig. 2, “control unit 94” depicted in fig. 2) that upon execution of the instructions, (¶87 and fig. 2, “a program to be executed by the control unit 94”) are configured to:
obtain photographed image (¶201,63-64, and Fig. 17, receives “image signals output from the camera head 5” and “parameter calculation unit 945” as depicted in fig. 17) taken by a camera, (¶201,254,63-64,59, and Fig. 17-18, image signals output from the “camera head” 5/5c with imaging unit 54/54B that “captures an image” as depicted in fig. 17)
obtain a photographing parameter (¶159-160 and Fig. 17, “brightness parameter” calculated by “parameter calculation unit 945” as part of “control device” 9/9c depicted in fig. 17) associated with the photographed image, (¶159-160, brightness parameter of “captured image CI” obtained by “imaging performed by the imaging unit 54B”) wherein the photographing parameter (¶159-160, “brightness parameter”) is automatically adjusted by the camera according to a brightness (¶168,159-160,263, and Fig. 17, “control device” 9/9c includes “brightness control unit 946 outputs a control signal to the imaging unit 54B” calculated by the “parameter calculation unit 945” that is configured to “automatically calculate” changing “brightness of the captured image CI”) of a surrounding area (¶157-160, “pixel information” on each pixel in a “detection area” in the captured image CI)
estimate the brightness of the surrounding area (¶172, “brightness parameters” calculated by the parameter calculation unit 945 based on “luminance average value that is detected by the detection processing unit 922B before the brightness change unit 200 changes the brightness of the captured image CI to the reference brightness”) based on the photographing parameter; (¶172, “brightness parameters” calculated before brightness changes of the captured image CI) and
detect a play area (¶201,205, and Fig. 18, subject distance calculation unit 947C “calculates the subject distance DS”) by analyzing the photographed image (¶205 and Fig. 17-18, subject distance calculation unit 947C calculates the subject distance DS by calculating “luminance signals (Y signals) that were present before the brightness change unit 200”) using an analysis condition (¶205 and Fig. 17-18, subject distance calculation unit 947C calculates “luminance signals (Y signals) that were present before the brightness change unit 200 changed the brightness” and acquires “luminance signals (Y signals) obtained after the brightness change unit 200 changed the brightness to the reference brightness”) selected according to the estimated brightness (¶205 and Fig. 17-18, “changed the brightness to the reference brightness according to the brightness parameters calculated” based on acquired “luminance signals (Y signals) among image signals” processed by the image processing unit 921B)
But does not explicitly teach,
a camera mounted on a head-mount display
wherein the photographing parameter is automatically adjusted according to a brightness of a surrounding area of the head-mounted display;
acquire three-dimensions information regarding an obstacle; and
detects a play area defining a movable range of a user free from collision with the obstacle, wherein the play area comprises at least a portion of a floor surface external to the user.
However, Faulkner teaches additionally,
a camera mounted on a head-mount display (¶42,67,70, and fig. 3, Head-mounted device “HMD 120” including one or more forward-facing “exterior-facing sensors 314” to obtain image data that corresponds to the scene as would be viewed by the user if the HMD 120)
wherein the photographing parameter is automatically adjusted (¶176 and 263, “simulated illumination on the physical objects and virtual objects in the three-dimensional environment are adjusted” in accordance to “movement of the user's head (or the HMD) relative to the physical world”) according to a brightness of a surrounding area of the head-mounted display; (¶176 and 263, adjusted because of “spatial relationships between the “representations of the physical objects” and virtual objects in the “three-dimensional environment” changed in response to “movement” that results in “furniture representation 7310′ is more illuminated”)
estimate the brightness of the surrounding area (¶208, “generate overlay for the floor surface” according to “the amount of light, the color of the light, as well as the direction of the light” coming from different simulated illumination on the floor surface) based on the photographing parameter; (¶208, generate overlay for the floor surface “that simulate the different amount, color, and direction of illuminations in different portions”)
acquire three-dimensions information regarding an obstacle; (¶140-141 and fig. 7I, “three-dimensional environment” including “view of furniture 7310 in front of front wall 7304” shown in the three-dimensional environment depicted in fig. 7I) and
detects a play area (¶140-141 and fig. 7I, “floor representation 7308’” of floor 7308 “visible in the three-dimensional environment after the virtual elements 7402” depicted in fig. 7I) defining a movable range of a user free from collision with the obstacle, (¶140-141 and fig. 7I, “view of furniture 7310 standing in front of front wall 7304” both blocking a “portion of the floor representation 7308’” of floor 7308 depicted in fig. 7I) wherein the play area comprises at least a portion of a floor surface external to the user. (¶140-141 and fig. 7I, floor “representation 7308′ of floor 7308” depicted in fig. 7I generated as viewing perspective displayed on device 7100 changed with respect to physical surfaces and objects” such as “floor” and furniture)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner, which modifies the illumination of a viewing perspective with clearly identifiable physical surfaces such as a floor. This allows for the generation of augmented reality environments that can be superimposed over a view of a physical environment visible through the display.
Regarding claim 8, Michihata with Faulkner teach the limitations of claim 1,
Michihata teaches additionally,
obtain, (¶201,184, and Fig. 17, “control device 9c” includes a “parameter calculation unit 945 calculates the brightness parameters” depicted in Fig. 17) as the photographing parameter, at least any one of an exposure time, an analog gain, and a digital gain. (¶184 and Fig. 17, parameter calculation unit 945 calculates “brightness parameters (the exposure time, the analog gain, the digital gain, and the amount of light)”)
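For illustrative context only, the brightness estimation that claims 1 and 8 map to the cited brightness parameters (exposure time, analog gain, digital gain) can be sketched as follows. This code is not drawn from Michihata or Faulkner; all function names, parameters, and the threshold value are hypothetical.

```python
def estimate_scene_brightness(mean_pixel_value, exposure_time, analog_gain, digital_gain):
    """Estimate relative scene brightness from auto-exposure parameters.

    A camera raises exposure time and gain in dim surroundings to hold the
    captured image near a reference brightness, so dividing the observed
    mean pixel value by the total exposure amplification yields a quantity
    proportional to the actual brightness of the surrounding area.
    """
    amplification = exposure_time * analog_gain * digital_gain
    if amplification <= 0:
        raise ValueError("exposure parameters must be positive")
    return mean_pixel_value / amplification


def select_analysis_condition(estimated_brightness, dim_threshold):
    """Select an analysis condition for the image according to estimated brightness."""
    return "low-light" if estimated_brightness < dim_threshold else "normal"
```

Under this sketch, a dim scene captured with a long exposure and high gain yields a lower brightness estimate than a bright scene captured with a short exposure and unity gain, even when both captured images have the same mean pixel value.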
Regarding claim 9, it is the method claim of device claim 1. Refer to the rejection of claim 1, which teaches the limitations of claim 9.
Regarding claim 10, it is the computer system claim of device claim 1.
Michihata teaches additionally,
A non-transitory computer-readable storage medium (¶87 and fig. 2, storage unit 97 “stores therein a program to be executed by the control unit 94”) comprising a program for a computer, (¶44,87, and 73, “control device 9” includes a “central processing unit (CPU)” and “storage unit 97 stores therein a program to be executed by the control unit 94” using a CPU) which when executed by the computer (¶87 and fig. 2, “control unit 94” depicted in fig. 2) causes the computer to perform one or more operations (¶87 and fig. 2, storage unit 97 “stores therein a program to be executed by the control unit 94”)
Refer to the rejection of claim 1, which teaches the remaining limitations of claim 10.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1) and further in view of YANG; Kang et al. (US 20190180461 A1)
Regarding claim 2, Michihata with Faulkner teach the limitations of claim 1,
Michihata teaches additionally,
changing an intensity of a filter (¶168,205, and fig. 17, brightness control unit 946 “sets the exposure time of each of the pixels of the imaging element 541 to the exposure time (brightness parameter) calculated by the parameter calculation unit 945” as change to “luminance signals” obtained after brightness change to “reference brightness”) for determining validity of the corresponding point, (¶186,205,168,207, and fig. 17, distance determination unit 942B “compares the subject distance DS” calculated at step S17c, using “luminance signals” obtained after brightness change set using “exposure time (brightness parameter) calculated by the parameter calculation unit 945”, with the “reference distance” to determine whether “subject distance DS is within the reference distance”) according to the photographing parameter. (¶168,205, and fig. 17, sets the exposure time to the “exposure time (brightness parameter) calculated by the parameter calculation unit 945”)
But does not explicitly teach the additional limitations of claim 2,
However, Yang teaches additionally,
extract a corresponding point (¶34,37, Fig. 1 and 2, “obtain depth information” of “an object” by matching “the first image 221 with the second image 222”) by:
performing block matching on a stereo image (¶34,37, and fig. 2, “matching the first image 221 with the second image 222 using block matching” using an exemplary block configuration 310 that is circular where “first block 341 that includes a first point 241 is defined on the first image 221” and “second block 342 that includes the second point 242” in second image 222) including the photographed image; (¶34, 37, and fig. 2, images of “an object 230”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the matching of Yang, which uses block matching to compare pixels of blocks between two images. The technique can advantageously improve the accuracy of block matching.
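For illustrative context only, block matching of a stereo pair as characterized in Yang (comparing a block around a first point in the first image with candidate blocks in the second image) can be sketched as follows. The sum-of-absolute-differences cost and all names are hypothetical, not drawn from the reference.

```python
def sad(left, right, x, y, d, half):
    """Sum of absolute differences between a block in the left image
    centered at (x, y) and the same-size block in the right image
    shifted left by disparity d."""
    total = 0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            total += abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
    return total


def match_disparity(left, right, x, y, max_disp, half=1):
    """Return the candidate disparity whose block has the lowest matching cost.

    The matching cost quantifies similarity between the pixels in the two
    blocks; the best (lowest) cost identifies the corresponding point, from
    which depth information can then be derived.
    """
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        if x - d - half < 0:
            break  # candidate block would fall outside the right image
        cost = sad(left, right, x, y, d, half)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```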
Regarding claim 3, Michihata with Faulkner with Yang teach the limitations of claim 2,
Yang teaches additionally,
in response to a change of the intensity of the filter, (¶55-56 and Fig. 7, “plurality of candidate second points 282a . . . 282m can be selected for evaluation” selected in a predefined neighborhood about the coordinates of the first point 241) adjust a range (¶34,55-56, fig. 1 and 7, “stereoscopic imaging apparatus 300 can select a plurality of block configurations 310 for block matching” including parameters relating to size or shape the “one or more blocks” as a ”defined range”) and an interval between pixels (¶34,55-56, fig. 1 and 7, candidate second points 282a . . . 282m selected “pixels within the defined range” can be sampled “randomly or by interval sampling”) from which a tendency of a similarity obtained (¶55-56 and Fig. 7, “matching cost c” of the sampled pixels within the defined range) as a result of the block matching is checked. (¶55-56,74,76, Fig. 7 and 3, “matching cost c” for sampled pixels within the defined range selected as “matching points between two images” as a quantified similarity “between the pixels in the blocks” of the first image 221 and second image 222 so that “candidate second point 282 associated with the best individual matching cost” can be selected)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the matching of Yang, which uses block matching to compare pixels of blocks between two images. The technique can advantageously improve the accuracy of block matching.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1) and further in view of KORJUS; Kristjan et al. (US 20210397187 A1)
Regarding claim 4, Michihata with Faulkner teaches the limitations of claim 1,
But does not explicitly teach the additional limitations of claim 4,
However, Korjus teaches additionally,
change a condition of detecting a floor surface candidate (¶254-256, “adjusting of the at least one detection threshold 43” can be based on at least one threshold parameter 60 such as “whether the environment comprises a road, road crossing, sidewalk, driveway”) according to the photographing parameter (¶256, “detector or threshold parameters 60”) on a basis of a distribution of points in a three-dimensions space, (¶254-256,201, and Fig. 3, detector or threshold parameters 60 related to when “an object 20 is detected” based on a “detector apparatus 30” with time-of-flight (ToF) camera “sensor device 32” sensing environment “detection area 33” depicted in fig. 3) corresponding to feature points in the photographed image. (¶201 and 204, sensor device 32 sensing detection area 33 for “detection of an object 20 and/or determination of the object's size, type and/or distance to the object 20”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the threshold adjustment of Korjus, which can adjust the sensitivity based on environmental parameters. This allows for determining an optimal threshold based on context that can provide insight into the environment.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1), further in view of KORJUS; Kristjan et al. (US 20210397187 A1), and further in view of Huang; Ziqiang et al. (US 20210004610 A1)
Regarding claim 5, Michihata with Faulkner with Korjus teach the limitations of claim 4,
But does not explicitly teach the additional limitations of claim 5,
However, Huang teaches additionally,
change, as the detection condition, (¶183, threshold used to “separate ground points from non-ground points”) a threshold value (¶183, “threshold for ground paint” selected automatically) to be given to a histogram of a number of points in a gravity direction, (¶183, “histogram of ground point intensity” updated using moving frame separated into “ground points and non-ground points”) according to the photographing parameter. (¶183, “ground point intensity” of the “ground points” in the moving frame)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the threshold adjustment of Korjus with the ground intensity profiling of Huang which automatically updates the ground intensity threshold. This allows for a technique that does not need to stop the vehicle at predefined locations to provide automatic calibration.
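For illustrative context only, the claim 5 limitation (a threshold applied to a histogram of point counts along the gravity direction) can be sketched as follows. Note the sketch bins by height rather than by intensity as in Huang; all names, bin sizes, and margins are hypothetical assumptions, not drawn from any cited reference.

```python
from collections import Counter


def floor_height_threshold(heights, bin_size=0.05, margin=0.10):
    """Estimate a floor-height cutoff from a histogram of point heights.

    Points lying on a flat floor pile into one narrow height bin, so the
    most populated bin is taken as the floor level and the threshold is
    placed a small margin above it.
    """
    bins = Counter(int(h // bin_size) for h in heights)
    floor_bin, _ = max(bins.items(), key=lambda kv: kv[1])
    return floor_bin * bin_size + margin


def split_ground_points(points, threshold):
    """Separate (x, y, z) points into ground and non-ground by height z."""
    ground = [p for p in points if p[2] <= threshold]
    non_ground = [p for p in points if p[2] > threshold]
    return ground, non_ground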
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1) and further in view of Saneyoshi; Keiji et al. (US 5410346 A)
Regarding claim 6, Michihata with Faulkner teach the limitations of claim 1,
But does not explicitly teach the additional limitations of claim 6,
However, Saneyoshi teaches additionally,
change a rule to derive an outline of the play area (15:9-15, extract only white lines on an actual road to “modify/change parameters of a road model”) according to the photographing parameter (15:9-15 and 14:16-43, linear element detecting section 113 detecting “three-dimensional linear elements constituting a road model” from information of the distance distribution “included in the distance picture” constituting a road model) on a basis of a distribution of points on the floor surface (14:16-43, road shape estimation section 111 estimate the “position of a white line and the shape of a road on the basis of information of a distance distribution included in the distance picture”) in a three-dimensions space, (14:16-43, three-dimensional window generating section 112 setting “a three-dimensional space area including the estimated white line on the” based on road shape estimated from the distance picture used to extract “three-dimensional linear elements constituting a road model”) corresponding to feature points in the photographed image. (15:9-15 and 14:16-43, “white lines on an actual road”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the model changing of Saneyoshi, which can change parameters based on the detected position estimates of road objects. This allows for an image monitoring technique whose detections are precise and highly reliable when detecting road shapes.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1), further in view of Saneyoshi; Keiji et al. (US 5410346 A), and further in view of Ishibashi; Takuya (US 20200211212 A1)
Regarding claim 7, Michihata with Faulkner with Saneyoshi teach the limitations of claim 6,
But does not explicitly teach the additional limitations of claim 7,
However, Ishibashi teaches additionally,
change, (¶76, “shape composition processing” includes processing by “changing the magnification α applied” when shape information is updated using information of subject distances) as the rule for deriving the outline of the play area, (¶76-77, shape composition unit 110 “determines the magnification α accordance with the distribution of subject distances in a segment region”) an α value (¶76-77, “magnification α”) in an alpha shape method (¶76-77, “magnification α” independently applied to each segment region in the updating of shape information) according to the photographing parameter. (¶76, changing the magnification α applied when “shape information is updated using information of subject distances”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the model changing of Saneyoshi with the shape composition of Ishibashi which can be updated using subject distances. This allows for a process that can improve efficiency of the computation cost by making the derivation of magnification for each region unnecessary.
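For illustrative context only, the alpha shape method recited in claim 7 can be sketched with a brute-force boundary-edge test: an edge lies on the alpha-shape boundary when some circle of radius alpha through its endpoints contains no other point. This code is not drawn from Ishibashi; all names are hypothetical.

```python
import math


def alpha_boundary_edges(points, alpha):
    """Boundary edges of the alpha shape of 2-D points (brute-force sketch).

    An edge (i, j) lies on the alpha-shape boundary if some circle of
    radius alpha passing through both endpoints contains no other point.
    Smaller alpha hugs the point set more tightly; as alpha grows, the
    result approaches the convex hull outline.
    """
    edges = []
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            d = math.dist(points[i], points[j])
            if d == 0 or d > 2 * alpha:
                continue  # no radius-alpha circle passes through both points
            mx, my = (x1 + x2) / 2, (y1 + y2) / 2
            h = math.sqrt(alpha * alpha - (d / 2) ** 2)
            # Unit vector perpendicular to the edge: the two candidate
            # circle centers lie on the perpendicular bisector.
            ux, uy = -(y2 - y1) / d, (x2 - x1) / d
            for sign in (1, -1):
                cx, cy = mx + sign * h * ux, my + sign * h * uy
                if all(math.dist((cx, cy), points[k]) >= alpha - 1e-9
                       for k in range(n) if k not in (i, j)):
                    edges.append((i, j))
                    break
    return edges
```

In this sketch the alpha value plays the role of the tunable parameter: changing it changes which edges survive, and hence the derived outline.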
Claims 11-12, 14, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1) and further in view of Ikenoue; Shoichi et al. (US 20180318704 A1)
Regarding claim 11, Michihata with Faulkner teaches the limitations of claim 1,
But does not explicitly teach the additional limitations of claim 11,
However, Ikenoue teaches additionally,
subsequent to detecting the play area defining the movable range of the user: (¶59-60, “boundary surfaces in the vertical direction of the field of view are used as the criteria for setting the boundary surfaces of a play area 184” on the basis of predetermined rules)
determine that the user is outside of the play area (¶59-60, “user is found outside of the play area 184”) during an execution of a software application. (¶59-60, warning state determining section 82 determining the need for a warning when determining “user is found outside of the play area 184”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the information processing of Ikenoue which can identify when a user is found outside the play area. The technique balances accuracy of information processing with an agreeable operating environment.
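For illustrative context only, the inside/outside play-area determination characterized in Ikenoue can be sketched as a standard ray-casting point-in-polygon test; the code and names are hypothetical, not drawn from the reference.

```python
def inside_play_area(position, polygon):
    """Ray-casting test: is a 2-D user position inside a polygonal play area?

    Casts a horizontal ray from the position to the right and counts edge
    crossings; an odd count means the position is inside the polygon.
    """
    x, y = position
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```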
Regarding claim 12, Michihata with Faulkner with Ikenoue teaches the limitations of claim 11,
Ikenoue teaches additionally,
in response to determining that the user is outside the play area (¶59-60, “user is found outside of the play area 184”) during the execution of the software application, (¶59-60, warning state determining section 82 determining the need for a warning when determining “user is found outside of the play area 184”) transmit, to the head-mount display, (¶44 and 59-60, “HMD 18” that receives output data “warning on the basis of the user’s position” when the user is found “outside of the play area 184”) an alarm prompting the user to return to the play area, (¶59, “warning state determining section 82 determines that there is a need for a warning”) wherein the head-mount display is configured to display the alarm to the user. (¶44,29, and 59-60, “HMD 18” that receives output data “presents the user wearing it with images on a display panel” that provides “warning on the basis of the user’s position” when the user is found “outside of the play area 184”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the information processing of Ikenoue which can identify when a user is found outside the play area. The technique balances accuracy of information processing with an agreeable operating environment.
Regarding claim 14, Michihata with Faulkner teaches the limitations of claim 1,
But does not explicitly teach the additional limitations of claim 14,
However, Ikenoue teaches additionally,
subsequent to detecting the play area defining the movable range of the user: (¶59-60, “boundary surfaces in the vertical direction of the field of view are used as the criteria for setting the boundary surfaces of a play area 184” on the basis of predetermined rules)
determine that the user is approaching a boundary of the play area (¶59-60 and 51, user is found outside of the play area 184 such as “when the user is about to go out of the angle of view”) during an execution of a software application; (¶59-60 and 51, warning state determining section 82 determining the need for a warning when determining user is “about to go out of the angle of view”) and
in response to determining that the user is approaching the boundary of the play area (¶59-60 and 51, “user is found outside of the play area 184” such as “when the user is about to go out of the angle of view”) during the execution of the software application, (¶59-60 and 51, warning state determining section 82 determining the need for a warning when determining “user is about to go out of the angle of view”) transmit, to the head-mount display, (¶44,59-60 and 51, “HMD 18” that receives output data “warning on the basis of the user’s position” when the user is found “user is about to go out of the angle of view”) an alarm to notify the user, (¶59 and 51, “warning state determining section 82 determines that there is a need for a warning” such as when the “user is about to go out of the angle of view”) wherein the head-mount display is configured to display the alarm to the user. (¶44,29,59-60, and 51, “HMD 18” that receives output data “presents the user wearing it with images on a display panel” that provides “warning on the basis of the user’s position” such as when the “user is about to go out of the angle of view”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the information processing of Ikenoue which can identify when a user is found outside the play area. The technique balances accuracy of information processing with an agreeable operating environment.
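For illustrative context only, determining that a user is approaching the boundary of the play area (claim 14) can be sketched as a nearest-edge distance test against a warning margin; the code, names, and margin value are hypothetical, not drawn from Ikenoue.

```python
import math


def point_segment_distance(p, a, b):
    """Shortest distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))


def approaching_boundary(position, polygon, margin):
    """True when the user's position is within `margin` of any play-area edge."""
    n = len(polygon)
    nearest = min(point_segment_distance(position, polygon[i], polygon[(i + 1) % n])
                  for i in range(n))
    return nearest < margin
```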
Regarding claim 16, Michihata with Faulkner teach the limitations of claim 1,
But does not explicitly teach the additional limitations of claim 16,
However, Ikenoue teaches additionally,
play area (¶50 and fig. 7, “play area” depicted in fig. 7) defines the movable range of the user (¶50, determination on a play area set used to determine that “a warning is needed if the user is found outside of the play area”) during an execution of a software application (¶50, “warning state determining section 82 performs inside/outside determination on a play area”) involving image display via the head-mount display. (¶85 and 87, “determination that the user is out of the play area is made when the markers attached to the HMD 18 worn by the user or the markers on the input device 14 held by the user are found outside of the play area” so that output “superposes a warning image on the content image”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the information processing of Ikenoue which can identify when a user is found outside the play area. The technique balances accuracy of information processing with an agreeable operating environment.
Regarding claim 17, Michihata with Faulkner teach the limitations of claim 1,
Faulkner teaches additionally,
wherein the floor surface (¶122-123 and figs. 7, “floor 7308” depicted in figs. 7) is orthogonal to a gravity direction. (¶122-123 and figs. 7, “floor 7308” as corresponding to the lower surface “floor representation 7308’” displayed on device 7100 that physical objects such as “furniture 7310”, “front wall 7304, and side wall 7306” limit or take portion of as depicted in figs. 7)
But does not explicitly teach the additional limitations of claim 17,
However, Ikenoue teaches additionally,
the floor surface (¶50, “play area” set in a “three-dimensional space of the real world”) comprises a continuous region having an area equal to or exceeding a predefined value, (¶50,59-60 and fig. 7, “area set as the play area in the real world” using criteria for setting the “boundary surfaces of a play area 184 on the basis of predetermined rules” inside the “boundary surfaces 182a and 182b of the field of view” and “set to have a constant width W” as depicted in fig. 7)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the information processing of Ikenoue which can identify when a user is found outside the play area. The technique balances accuracy of information processing with an agreeable operating environment.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1), further in view of Ikenoue; Shoichi et al. (US 20180318704 A1), and further in view of Takemoto; Kazuki (US 20220230358 A1)
Regarding claim 13, Michihata with Faulkner with Ikenoue teaches the limitations of claim 11,
Ikenoue teaches additionally,
in response to determining that the user is outside the play area (¶59-60, “user is found outside of the play area 184”) during the execution of the software application, (¶59-60, warning state determining section 82 determining the need for a warning when determining “user is found outside of the play area 184”) transmit, to the head-mount display, (¶44 and 59-60, “HMD 18” that receives output data “warning on the basis of the user’s position” when the user is found “outside of the play area 184”) an alarm image prompting the user to return to the play area, (¶59, “warning state determining section 82 determines that there is a need for a warning”) wherein the head-mount display is configured to display the image to the user. (¶44,29, and 59-60, “HMD 18” that receives output data “presents the user wearing it with images on a display panel” that provides “warning on the basis of the user’s position” when the user is found “outside of the play area 184”)
but does not explicitly teach,
transmit an image of a surrounding real space with respect to the user,
However, Takemoto teaches additionally,
transmit an image of a surrounding real space with respect to the user, (¶29, “generating a composite image in which a whole or a part of a real space image captured by the image capturing apparatus is superimposed on the CG virtual space image”)
Ikenoue discloses warning a user based on their position, including warning a user who may be outside of a play area. Takemoto is relied upon to disclose that a real space image captured by an image capturing apparatus can be superimposed on a computer-generated virtual space image. When combined, they create a warning similar to what is being claimed, where a warning presents a superimposed real space image. It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner with the information processing of Ikenoue with the superimposing of Takemoto, which combines a virtual space image with a real space image. This allows for generating a mixed reality that adds to the user interactions available.
Claim(s) 15 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1), and further in view of LeBeau; Michael James et al. (US 20220086205 A1).
Regarding claim 15, Michihata with Faulkner teaches the limitations of claim 1,
but does not explicitly teach the additional limitations of claim 15,
However, LeBeau teaches additionally,
transmit, to the head-mount display, (¶35,84, and fig. 6A, “extra reality” XR work system 600 depicted in fig. 6A) an image indicating a boundary of the play area; (¶84 and fig. 6A, the XR work system depicted in fig. 6A displayed “a dedicated space 602” that the user manipulates using “users hand positions”)
receive a user operation providing instructions to modify the play area; (¶84 and fig. 6A, “monitoring a user's hand positions to control corners 604 and 606” that specifies “a square of dedicated space” as depicted in fig. 6A) and
adjust the boundary of the play area based on the user operation. (¶84 and fig. 6A, “user places these corners to specify a square of dedicated space and puts it at the height of the surface of his desk”)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner and the adjustment of LeBeau, where a user can adjust the work environment. This allows for accommodating different numbers of users and working goals.
Regarding claim 18, Michihata with Faulkner teaches the limitations of claim 1,
but does not explicitly teach the additional limitations of claim 18,
However, LeBeau teaches additionally,
generate a play area image (¶84 and fig. 6A-6B, “creating an artificial reality working environment” linked to one or more real-world surfaces) comprising a boundary surface part (¶84 and fig. 6A-6B, “a dedicated space 602”) indicating a surface orthogonally intersecting with the play area. (¶84 and fig. 6A-6B, “dedicated space 602 on his physical desk 608” as depicted in fig. 6A)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner and the adjustment of LeBeau, where a user can adjust the work environment. This allows for accommodating different numbers of users and working goals.
Regarding claim 19, Michihata with Faulkner with LeBeau teaches the limitations of claim 18,
LeBeau teaches additionally,
generate an updated image by superimposing the play area image (¶84 and fig. 6A-6B, “monitoring a user's hand positions to control corners 604 and 606” that specifies a square of dedicated space) on an image of a surrounding real space with respect to the user; (¶84 and fig. 6A, user places corners to “specify a square of dedicated space and puts it at the height of the surface of his desk” as part of an artificial reality working environment as depicted in fig. 6A) and
transmit the updated image to the head-mount display for presentation to the user. (¶84 and fig. 6A-6B, Once the dedicated space 602 has been specified, “default anchor point for screen 652 and control and notification bars 654, 656, and 658 are set while an anchor point for MR keyboard 660 is set” as depicted in fig. 6B)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner and the adjustment of LeBeau, where a user can adjust the work environment. This allows for accommodating different numbers of users and working goals.
Claim(s) 20 is rejected under 35 U.S.C. 103 as being unpatentable over Michihata; Taihei et al. (US 20180243043 A1) in view of Faulkner; Jeffrey M. et al. (US 20210097776 A1), and further in view of Venkataraman; Kartik et al. (US 20160309134 A1).
Regarding claim 20, Michihata with Faulkner teaches the limitations of claim 1,
but does not explicitly teach the additional limitations of claim 20,
However, Venkataraman teaches additionally,
head-mount display (¶87 and 39, render the virtual object on “a transparent display through which a viewer can see the virtual object overlaid on the scene” using the hardware of a set of cameras mounted within a “headset that includes a display via which images can be displayed”) obscures at least a portion of a surrounding real space from a viewing field of the user. (¶87, “image being captured wherein the virtual object is the focal point of the capture and the background is appropriately defocused/de-emphasized” within an AR/VR/MR context)
It would have been obvious to one with ordinary skill in the art before the effective filing date of the claimed invention to combine the control device of Michihata with the display environment of Faulkner and the focusing of Venkataraman, which emphasizes a virtual object and de-emphasizes the background. This allows for synthesizing an image that emphasizes the virtual object and utilizing pose estimation, which can improve accuracy for AR/VR systems.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIMMY S LEE whose telephone number is (571)270-7322. The examiner can normally be reached Monday through Friday 10AM-8PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph G. Ustaris can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH G USTARIS/Supervisory Patent Examiner, Art Unit 2483
/JIMMY S LEE/Examiner, Art Unit 2483