DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to the Amendment filed on 11/21/2025.
Status of the Claims:
Claims 21, 30 and 39 have been amended.
Claims 21-40 are pending in this Office Action.
Response to Arguments
Applicant's arguments filed 11/21/2025 have been fully considered but they are not persuasive.
Applicant argues “The Office Action alleges that Imai discloses "adjust virtual content based on information about the image capture (by further providing for the display of computer- generated (CG) data in relation to the area of interest, see par. [0031]); and blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image (in such a way as to appear to inhabit the same world as the objects in the area of interest, see par. [0031])." However, the cited portions of Imai do not disclose both an adjustment of virtual content as well as a blending of the adjusted virtual content wherein the tone of the virtual content is modified to blend the adjusted virtual content in the image of the scene. For example, the cited portions of Imai at paragraph 31 state:
Aspects of the invention further provide for the display of computer-generated (CG) data to enhance the viewing experience of the user. This data would be displayed in relation to the area of interest 114 and may include, but are not limited to, informative text, links to additional information, images, etc. The data may be displayed in such a way as to appear to inhabit the same world as the objects in the area of interest 114 and that its associated imaging properties are in agreement with the current imaging properties of the area of interest 114, including such properties as prospective, focus, sharpness, white balance, dynamic range, resolution, brightness and tone mapping.
In the cited portions of Imai, the computer generated data is merely "displayed in such a way as to appear to inhabit the same world", but does not disclose both an adjustment of the virtual content based on information about the image capture and blending of the adjusted virtual content...wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene. Even assuming, in arguendo, that "displayed in such a way as to appear to inhabit the same world" including changing "prospective, focus, sharpness, white balance, dynamic range, resolution, brightness and tone mapping" may be mapped to “adjust virtual content based on information about the image capture”, the same adjustment may not be mapped also to blend the adjusted virtual content into the image... wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene. FIG. 3 of the present application, for example, illustrates "apply exposure compensation 310" at the "Renderer" and "Apply tone mapping" at the "Display pipeline 324". Lyons is not relied on as teaching adjustment of virtual content nor blending of the adjusted virtual content. See Office Action at page 4. Applicant respectfully requests that 35 U.S.C. § 103 rejections of claim 21 be removed.”
Examiner respectfully disagrees. The claim as written does not require that the tone of the "adjusted virtual content" be further modified from the result of the adjusting step, nor that the tone modification occur during the blending step. Instead, the references meet the limitations by adjusting virtual content based on information about the image capture, wherein the virtual content is modified such that its tone matches the image of the scene; the adjusted virtual content is then blended into the image of the scene (i.e., combining the image of the scene and the adjusted virtual content), which results in a blended image in which a tone of the virtual content has been modified such that it blends into the scene.
Applicant appears to be arguing the claim as though it recited: "adjust virtual content based on information about the image capture; and blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image, wherein the blending of the adjusted virtual content includes modifying a tone of the adjusted virtual content to blend the adjusted virtual content into the image of the scene."
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21-22, 24, 26-32, 34 and 36-40 are rejected under 35 U.S.C. 103 as being unpatentable over US 2011/0273466 to Imai et al. (hereinafter Imai) in view of US 2013/0258089 to Lyons (hereinafter Lyons).
Regarding independent claim 21, Imai discloses a device comprising:
a camera configured to capture images of a scene (camera 108, see Fig. 3A);
a gaze tracking system (tracks the viewer 106 gaze as shown in area 114, see par. [0029] and Fig. 1); and
a controller comprising one or more processors configured to (at least one processor 20, see par. [0081]):
determine a region of interest in the scene based on gaze tracking information obtained from the gaze tracking system (area 114 is the area of interest to the viewer 106, see par. [0029]);
cause the camera to capture an image of the scene … according to the region of interest (adjustment of the imaging property in the area 114 (see par. [0029]), such as the focus, sharpness, white balance, dynamic range, resolution, brightness and tone mapping of the area of interest (see par. [0030]), and captures the image of the scene from the one or more imaging parameters, see par. [0033]);
adjust virtual content based on information about the image capture (by further providing for the display of computer-generated (CG) data in relation to the area of interest, whose associated imaging properties are in agreement with the current imaging properties of the area of interest 114, see par. [0031]); and
blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image (CG data is displayed in such a way as to appear to inhabit the same world as the objects in the area of interest, see par. [0031]), wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene (the CG data's associated imaging properties are in agreement with the current imaging properties of the area of interest 114, including such properties as prospective, focus, sharpness, white balance, dynamic range, resolution, brightness and tone mapping, see par. [0031]).
However, Imai fails to disclose “to capture an image of the scene auto-exposed to the region of interest”.
Lyons is a similar or analogous system to the claimed invention, as evidenced by Lyons's teaching of an imaging system. The motivation of properly exposing an object of interest, thereby improving image quality, would have prompted a predictable variation of Imai by applying Lyons's known principle of performing auto exposure based on a gaze detection target (paragraph 14 teaches performing auto exposure according to a region of interest based on gaze detection).
In view of motivations such as properly exposing an object of interest, thereby improving image quality, one of ordinary skill in the art would have implemented the claimed variation of the prior art system of Imai.
Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 22, Imai in view of Lyons teaches the device as recited in claim 21, wherein the information about the image capture includes a camera exposure, ambient lighting information for the scene, or an exposure compensation determined from the camera exposure and the ambient lighting information for the scene (adjustment of the imaging property in the area 114 (see par. [0029]), such as the focus, sharpness, white balance, dynamic range, resolution, brightness (hence ambient lighting) and tone mapping of the area of interest, see Imai par. [0030]).
Regarding claim 24, Imai in view of Lyons teaches the device as recited in claim 21, wherein the controller is further configured to apply a tone-mapping technique to the blended image to tone-map the blended image from HDR linear encoding to a dynamic range of a display screen (adjustment of the imaging property in the area 114 (see par. [0029]), such as the focus, sharpness, white balance, dynamic range, resolution, brightness and tone mapping of the area of interest, see Imai par. [0030]).
Regarding claim 26, Imai in view of Lyons teaches the device as recited in claim 21, wherein the device further comprises at least one display screen, and wherein the controller is further configured to cause the blended image to be displayed on the display screen (display 102 shows blended image, see Figs. 6A-B).
Regarding claim 27, Imai in view of Lyons teaches the device as recited in claim 26.
But Imai fails to disclose “wherein the device further comprises left and right optical lenses located between the at least one display screen and eyes of a user of the device”.
However, Lyons teaches "wherein the device further comprises left and right optical lenses located between the at least one display screen and eyes of a user of the device" (left and right cameras 42 and 44 located between the display and the user eyes, see Lyons Fig. 7).
References are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned because they relate to gaze tracking systems.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the above system as taught by Imai, by incorporating the left and right eye cameras as taught by Lyons.
One of ordinary skill in the art would have been motivated to do this modification in order to properly track the user gaze for proper composition in image capturing as suggested by Lyons (see [0010]).
Regarding claim 28, Imai in view of Lyons teaches the device as recited in claim 21.
But Imai fails to disclose “wherein the gaze tracking system comprises:
at least one eye tracking camera; and
one or more light sources configured to emit light towards eyes of a user of the device,
wherein the at least one eye tracking camera captures a portion of the light reflected off the eyes of the user of the device”.
However, Lyons teaches “wherein the gaze tracking system comprises:
at least one eye tracking camera (eye cameras 42 and 44, see Lyons Figs. 6-7); and
one or more light sources configured to emit light towards eyes of a user of the device (infrared light emitter 46 to emit towards user eyes, see Lyons Figs. 6-7),
wherein the at least one eye tracking camera captures a portion of the light reflected off the eyes of the user of the device (infrared light emitters 46 are used to reflect light off the user eyes to track the eyes, see Lyons par. [0031])”.
References are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned because they relate to gaze tracking systems.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the above system as taught by Imai, by incorporating the left and right eye cameras and infrared light emitters as taught by Lyons.
One of ordinary skill in the art would have been motivated to do this modification in order to properly track the user gaze for proper composition in image capturing as suggested by Lyons (see [0010]).
Regarding claim 29, Imai in view of Lyons teaches the device as recited in claim 21.
But Imai fails to clearly specify “wherein the device is a head-mounted device (HMD)”.
However, Lyons teaches "wherein the device is a head-mounted device (HMD)" (glasses mounted unit 64, see Fig. 7).
References are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned because they relate to gaze tracking systems.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the above system as taught by Imai, by incorporating the left and right eye cameras and infrared light emitters as taught by Lyons.
One of ordinary skill in the art would have been motivated to do this modification in order to properly track the user gaze for proper composition in image capturing as suggested by Lyons (see [0010]).
Regarding claims 30-32, 34 and 36-38, these claims are drawn to the method used by the corresponding apparatus in claims 21-22, 24 and 26-29 and are rejected for the same reasons used above.
Regarding claims 39-40, these claims are drawn to the non-transitory computer-readable storage medium used by the corresponding apparatus in claims 21-22 and are rejected for the same reasons used above (also, Imai teaches the use of a non-transitory computer-readable medium, see pars. [0084]-[0085]).
Claims 23 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Imai in view of Lyons and further in view of US 2007/0189758 to Iwasaki (hereinafter Iwasaki).
Regarding claim 23, Imai in view of Lyons teaches the device as recited in claim 21, but fails to clearly specify "wherein, prior to said blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image, the controller is configured to apply exposure compensation to a region of the image captured by the camera outside of the region of interest, wherein, in the blended image, the region of interest remains exposed at the camera exposure".
Iwasaki is a similar or analogous system to the claimed invention, as evidenced by Iwasaki's teaching of an imaging device. The motivation of improving image quality through exposure compensation would have prompted a predictable variation of Imai in view of Lyons by applying Iwasaki's known principle of adjusting content exposure based on information about the image capture (figure 4 exhibits steps S109 and S110, in which an exposure correction is determined based on the difference between a proper exposure value of a subject and a highest exposure value, as disclosed at paragraphs 85 and 86), and applying the exposure compensation to the image outside of the region of interest to generate an exposure-compensated image in which the region of interest is exposed at the camera exposure and the image outside of the region of interest is exposed at the scene exposure (figure 4 exhibits step 112, in which an image is exposure compensated by applying the calculated exposure correction, as disclosed at paragraph 99).
In view of motivations such as improving image quality through exposure compensation, one of ordinary skill in the art would have implemented the claimed variation of the prior art system of Imai in view of Lyons.
Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 33, this claim is drawn to the method used by the corresponding apparatus in claim 23 and is rejected for the same reasons used above.
Claims 25 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Imai in view of Lyons and further in view of US 2014/0341468 to Paris et al. (hereinafter Paris).
Regarding claim 25, Imai in view of Lyons teaches the device as recited in claim 24, however, they fail to disclose “wherein the tone mapping technique includes highlight compression to reveal detail of highlights in the blended image”.
Paris is a similar or analogous system to the claimed invention, as evidenced by Paris's teaching of a method for tone mapping. The motivation of generating output images that are visually better than images generated using conventional tone mapping would have prompted a predictable variation of Imai in view of Lyons by applying Paris's known principle of using a tone mapping technique that includes highlight compression to reveal detail of highlights in the blended image (paragraph 58 teaches compressing highlights of a base layer so as to leave room for luminance details).
In view of motivations such as generating output images that are visually better than images generated using conventional tone mapping, one of ordinary skill in the art would have implemented the claimed variation of the prior art system of Imai in view of Lyons and further in view of Paris.
Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 35, this claim is drawn to the method used by the corresponding apparatus in claim 25 and is rejected for the same reasons used above.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21-23, 29-30 and 33 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 10 and 15 of U.S. Patent No. 11,792,531.
Although the claims at issue are not identical, they are not patentably distinct from each other because claims of the instant application are anticipated by patent claims as shown infra.
Instant Application
21. A device, comprising:
a camera configured to capture images of a scene;
a gaze tracking system; and
a controller comprising one or more processors configured to:
determine a region of interest in the scene based on gaze tracking information obtained from the gaze tracking system;
cause the camera to capture an image of the scene auto-exposed according to the region of interest;
adjust virtual content based on information about the image capture; and
blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image, wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene.
30. A method, comprising:
performing, by one or more processors:
determining a region of interest in a scene based on gaze tracking information;
causing a camera to capture an image of the scene at an auto-exposure setting determined from the region of interest; and
adjusting virtual content based on information about the image capture; and
blending the adjusted virtual content into the image of the scene captured by the camera to generate a blended image, wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene.
Patent 11,792,531
1. A system, comprising:
a head-mounted device (HMD) comprising:
a camera configured to capture images of a scene;
a gaze tracking system; and
an ambient light sensor;
a controller comprising one or more processors configured to:
determine a region of interest in the scene based on gaze tracking information obtained from the gaze tracking system;
cause the camera to capture an image of the scene selectively auto-exposed at a camera exposure selected according to the region of interest within the scene of the camera;
determine an exposure compensation based on a difference between the camera exposure determined according to the region of interest within the scene and a scene exposure determined from ambient lighting information for the scene obtained from the ambient light sensor; and
apply the exposure compensation determined based on the difference to only an area of the image outside of the region of interest to generate an exposure-compensated image in which the region of interest is exposed at the camera exposure and the area outside of the region of interest is exposed at the scene exposure.
5. The system as recited in claim 1, wherein the controller is further configured to:
render an image containing virtual content, wherein, to render the image containing virtual content, the controller is configured to apply the exposure compensation to the virtual content so that the image containing virtual content is exposed at the scene exposure;
blend the image containing virtual content into the exposure-compensated image to generate a blended image; and
apply a tone-mapping technique to the blended image to tone-map the blended image from HDR linear encoding to a dynamic range of a display screen; and cause the blended image to be displayed on the display screen.
10. A method, comprising:
performing, by one or more processors:
determining a region of interest in a scene based on gaze tracking information;
causing a camera to capture an image of the scene auto-exposed at a camera exposure determined according to the region of interest within the scene of the camera; and
applying exposure compensation to only an area of the image outside the region of interest only to generate an exposure-compensated image in which the region of interest is exposed at the camera exposure and the area outside of the region of interest is exposed at a scene exposure, wherein the applied exposure compensation is determined based on a difference between the camera exposure determined according to the region of interest and the scene exposure.
15. The method as recited in claim 10, further comprising:
rendering an image containing virtual content, wherein rendering the image containing virtual content comprises applying the exposure compensation to the virtual content so that the image containing virtual content is exposed at the scene exposure;
blending the image containing virtual content into the exposure-compensated image to generate a blended image;
applying a tone-mapping technique to the blended image to tone-map the blended image from HDR linear encoding to a dynamic range of a display screen; and causing the blended image to be displayed on the display screen.
Claim 22 of the instant application is encompassed by patent claim 1.
Claim 23 of the instant application is encompassed by patent claim 1.
Claim 29 of the instant application is encompassed by patent claim 1.
Claim 33 of the instant application is encompassed by patent claim 1.
Claims 21, 24-28 and 34-38 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3-4, 6-7, 10-11 and 15-19 of U.S. Patent No. 11,792,531 (hereinafter '531) in view of Imai.
Regarding independent claim 21, '531 discloses a device (A system, comprising: a head-mounted device (HMD), see claim 1) comprising:
a camera configured to capture images of a scene (a camera configured to capture images of a scene, see claim 1);
a gaze tracking system (a gaze tracking system, see claim 1); and
a controller comprising one or more processors configured to (a controller comprising one or more processors, see claim 1):
determine a region of interest in the scene based on gaze tracking information obtained from the gaze tracking system (determine a region of interest in the scene based on gaze tracking information obtained from the gaze tracking system, see claim 1);
cause the camera to capture an image of the scene auto-exposed according to the region of interest (cause the camera to capture an image of the scene selectively auto-exposed at a camera exposure selected according to the region of interest within the scene of the camera).
But ‘531 fails to clearly specify “adjust virtual content based on information about the image capture; and
blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image, wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene”.
However, Imai teaches “adjust virtual content based on information about the image capture (by further providing for the display of computer-generated (CG) data in relation to the area of interest, see par. [0031]); and
blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image, wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene (in such a way as to appear to inhabit the same world as the objects in the area of interest, see par. [0031] the CG data associated imaging properties are in agreement with the current imaging properties of the area of interest 114, including such properties as prospective, focus, sharpness, white balance, dynamic range, resolution, brightness and tone mapping, see par. [0031])”.
References are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned because they relate to gaze tracking systems.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the above system as taught by '531, by incorporating the adjustment and blending of CG data as taught by Imai.
One of ordinary skill in the art would have been motivated to make this modification in order to enhance the quality and experience of viewing the image with CG content as suggested by Imai (see [0026]).
Claim 24 of the instant application is encompassed by patent claim 3.
Claim 25 of the instant application is encompassed by patent claim 4.
Claim 26 of the instant application is encompassed by patent claim 6.
Claim 27 of the instant application is encompassed by patent claim 7.
Claim 28 of the instant application is encompassed by patent claim 8.
Claim 31 of the instant application is encompassed by patent claim 11 (which depends on claim 10, the corresponding method of claim 1).
Claim 32 of the instant application is encompassed by patent claims 10 and 11.
Claim 34 of the instant application is encompassed by patent claim 15.
Claim 35 of the instant application is encompassed by patent claim 16.
Claim 36 of the instant application is encompassed by patent claim 17.
Claim 37 of the instant application is encompassed by patent claim 18.
Claim 38 of the instant application is encompassed by patent claim 19.
Claims 39-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 20 of U.S. Patent No. 11,792,531 (hereinafter '531) in view of Imai.
Regarding claim 39, '531 teaches one or more non-transitory computer-readable storage media storing program instructions that when executed on or across one or more processors cause the one or more processors to (One or more non-transitory computer-readable storage media storing program instructions that when executed on or across one or more processors cause the one or more processors to, see claim 20):
determine a region of interest in a scene based on gaze tracking information obtained from a gaze tracking system (determine a region of interest in a scene based on gaze tracking information obtained from a gaze tracking system);
cause a camera to capture an image of the scene auto-exposed according to the region of interest (cause a camera to capture an image of the scene auto-exposed at a camera exposure determined according to the region of interest within the scene of the camera).
But ‘531 fails to clearly specify “adjust virtual content based on information about the image capture; and
blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image, wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene”.
However, Imai teaches “adjust virtual content based on information about the image capture (by further providing for the display of computer-generated (CG) data in relation to the area of interest, see par. [0031]); and
blend the adjusted virtual content into the image of the scene captured by the camera to generate a blended image, wherein a tone of the virtual content is modified to blend the adjusted virtual content into the image of the scene (in such a way as to appear to inhabit the same world as the objects in the area of interest, see par. [0031] the CG data associated imaging properties are in agreement with the current imaging properties of the area of interest 114, including such properties as prospective, focus, sharpness, white balance, dynamic range, resolution, brightness and tone mapping, see par. [0031])”.
References are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned because they relate to gaze tracking systems.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the above system as taught by '531, by incorporating the adjustment and blending of CG data as taught by Imai.
One of ordinary skill in the art would have been motivated to make this modification in order to enhance the quality and experience of viewing the image with CG content as suggested by Imai (see [0026]).
Claim 40 of the instant application is encompassed by corresponding system claim 20 of the patent.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGEL L GARCES-RIVERA whose telephone number is (571)270-7268. The examiner can normally be reached Mon-Fri 9AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran can be reached on 571-727-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANGEL L GARCES-RIVERA/ Examiner, Art Unit 2637
/SINH TRAN/ Supervisory Patent Examiner, Art Unit 2637