DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7-11, 17-20, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0134336 to Kogure et al. in view of US 2024/0013501 to Sasayama et al., and further in view of US 2018/0136720 to Spitzer et al.
As per claim 1, Kogure et al. teach a method at an electronic device, the method comprising:
obtaining sensed data representing vehicle motion (Fig. 4, 402, paragraph 37, “The vehicle information serves as information indicating a driving state of the vehicle 1, such as a steering amount of the steering portion 4 detected by the steering angle sensor 19 and a speed of the vehicle”);
obtaining data for determining a point of focus on a display (Fig. 4, 400/403/404/405, paragraph 6, “detecting an object having saliency in the top-down saliency map among the recognized objects as a candidate for a visual confirmation target at which the driver looks, and a visual confirmation target determination portion determining a visual confirmation target at which the driver looks on a basis of an extraction result of the gaze region and a detection result of the candidate for the visual confirmation target”).
Kogure et al. do not explicitly teach defining a foveal region of the display and a peripheral region of the display, the foveal region being defined to be a region immediately surrounding and encompassing the point of focus, and the peripheral region being defined to be an annular region outside of the foveal region and surrounding the foveal region, wherein at least edge regions of the display are outside of the peripheral region; and providing a visual output, via the display, wherein providing the visual output comprises providing visual cues for mitigation of motion sickness that exhibit movement in accordance with the real-time vehicle motion, the visual cues being provided in the peripheral region and being excluded from the foveal region.
Sasayama et al. teach defining a foveal region of the display (Fig. 13, 21) and a peripheral region of the display (Fig. 13, 22), the foveal region being defined to be a region immediately surrounding and encompassing the point of focus (paragraph 34, “a user 30 viewing a gaze region 21”), and the peripheral region being defined to be an annular region outside of the foveal region and surrounding the foveal region (Fig. 13, region surrounding 21), wherein at least edge regions of the display are outside of the peripheral region (Fig. 13, region outside of 22); and providing a visual output, via the display, wherein providing the visual output comprises providing visual cues (Fig. 13, 22b) for mitigation of motion sickness (paragraph 73, “motion sickness regulation image 22b”) that exhibit movement in accordance with the real-time vehicle motion (Figs. 16A-17B, paragraph 73, “the movement of a motion sickness regulation image 22b displayed on (e.g., projected onto) the windshield 40 (when the vehicle 41 is traveling forward)”, paragraph 75, “movement of the motion sickness regulation image displayed on the windshield 40 (when the vehicle 41 turns left)”), the visual cues being provided in the peripheral region and being excluded from the foveal region (Fig. 13, 22b, paragraph 34, “display a motion sickness regulation image (i.e., stimulus image) 22a regulating the level of the motion sickness occurring to a user 30 viewing a gaze region”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device of Kogure et al. by defining a foveal region of the display and a peripheral region of the display, the foveal region being defined to be a region immediately surrounding and encompassing the point of focus, and the peripheral region being defined to be an annular region outside of the foveal region and surrounding the foveal region, wherein at least edge regions of the display are outside of the peripheral region; and providing a visual output, via the display, wherein providing the visual output comprises providing visual cues for mitigation of motion sickness that exhibit movement in accordance with the real-time vehicle motion, the visual cues being provided in the peripheral region and being excluded from the foveal region, as taught by Sasayama et al., for the purpose of reducing driver distraction during operation of a vehicle.
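For illustration of the region geometry mapped above, the following is a minimal Python sketch (not taken from Kogure or Sasayama; the circular geometry and all identifiers are assumptions) of a foveal region immediately surrounding a point of focus, an annular peripheral region around it, edge regions outside both, and motion-sickness cues confined to the peripheral region:

```python
from dataclasses import dataclass
import math

@dataclass
class Regions:
    focus_x: float            # point of focus on the display (pixels)
    focus_y: float
    foveal_radius: float      # bounds the region immediately surrounding the point of focus
    peripheral_radius: float  # outer bound of the annular peripheral region

def classify(r: Regions, x: float, y: float) -> str:
    """Classify a display location as foveal, peripheral (annular), or outside
    (e.g., the edge regions of the display)."""
    d = math.hypot(x - r.focus_x, y - r.focus_y)
    if d <= r.foveal_radius:
        return "foveal"
    if d <= r.peripheral_radius:
        return "peripheral"
    return "outside"

def show_motion_cue(r: Regions, cue_x: float, cue_y: float) -> bool:
    """Motion-sickness cues are drawn only when they fall in the peripheral
    region, i.e., they are excluded from the foveal region and the edges."""
    return classify(r, cue_x, cue_y) == "peripheral"
```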
Kogure and Sasayama et al. do not teach wherein providing the visual output comprises providing the visual output with higher resolution in the foveal region and lower resolution outside of the foveal region, the visual cues being provided in the lower resolution.
Spitzer et al. teach wherein providing the visual output comprises providing the visual output with higher resolution in the foveal region and lower resolution outside of the foveal region, the visual cues being provided in the lower resolution (paragraph 24, “in some embodiments, the resolution of a peripheral region may be based at least in part on the distance of that region from the foveal region”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device of Kogure and Sasayama et al. by providing the visual output with higher resolution in the foveal region and lower resolution outside of the foveal region, the visual cues being provided in the lower resolution, as taught by Spitzer et al., for the purpose of reducing bandwidth and processing load.
As per claim 7, Kogure, Sasayama and Spitzer et al. teach the method of claim 6, wherein the visual output outside of the foveal region has a resolution that progressively decreases with distance from the foveal region (Spitzer, paragraph 24, “in some embodiments, the resolution of a peripheral region may be based at least in part on the distance of that region from the foveal region”).
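As an illustration of the distance-based resolution falloff attributed to Spitzer above, a minimal sketch assuming a linear falloff (the falloff shape, the min_scale floor, and all parameter names are assumptions, not taken from the reference; max_distance is assumed greater than foveal_radius):

```python
def resolution_scale(distance: float, foveal_radius: float,
                     max_distance: float, min_scale: float = 0.25) -> float:
    """Return a resolution scale of 1.0 inside the foveal region that
    progressively decreases with distance outside it, down to min_scale."""
    if distance <= foveal_radius:
        return 1.0
    t = min(1.0, (distance - foveal_radius) / (max_distance - foveal_radius))
    return 1.0 - t * (1.0 - min_scale)  # linear decrease toward min_scale
```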
As per claim 8, Kogure, Sasayama and Spitzer et al. teach the method of claim 1, wherein the visual cues are provided in only a portion of the peripheral region (Sasayama, Fig. 17B).
As per claim 9, Kogure, Sasayama and Spitzer et al. teach the method of claim 1, further comprising: determining an updated point of focus (Sasayama, Fig. 17A shows gaze focused on a straight road, Fig. 17B shows gaze focused on turn) at an updated location on the display; updating the foveal region and the peripheral region relative to the updated location of the updated point of focus; and providing an updated visual output, via the display, with the visual cues being provided in the updated peripheral region and being excluded from the updated foveal region (Sasayama, Figs. 17A-17B).
As per claim 10, Kogure, Sasayama and Spitzer et al. teach the method of claim 9, wherein updating the foveal region and the peripheral region includes changing at least one of a size parameter, a shape parameter, or a centeredness parameter of at least one of the foveal region or the peripheral region (Sasayama, Figs. 17A and 17B, the relative positioning and clustering is modified).
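A minimal sketch of the update step discussed for claims 9 and 10 (illustrative only; the returned dictionary layout and the choice of a multiplicative size factor are assumptions, and a shape or centeredness parameter could be varied analogously):

```python
def update_regions(focus: tuple, foveal_radius: float, peripheral_radius: float,
                   size_factor: float = 1.0) -> dict:
    """Re-center the foveal and peripheral regions on an updated point of
    focus, optionally changing a size parameter of both regions."""
    x, y = focus
    return {
        "focus": (x, y),
        "foveal_radius": foveal_radius * size_factor,
        "peripheral_radius": peripheral_radius * size_factor,
    }
```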
As per claim 11, it comprises similar limitations to those in claim 1 and it is therefore rejected for similar reasons.
As per claim 17, it comprises similar limitations to those in claim 7 and it is therefore rejected for similar reasons.
As per claim 18, it comprises similar limitations to those in claim 9 and it is therefore rejected for similar reasons.
As per claim 19, Kogure, Sasayama and Spitzer et al. teach the electronic device of claim 11, wherein the electronic device is one of: a smartphone; a laptop; a tablet; an in-vehicle display device (Sasayama, Fig. 13); a head mounted display device; an augmented reality device; or a virtual reality device.
As per claim 20, it comprises similar limitations to those in claim 1 and it is therefore rejected for similar reasons.
As per claim 24, it comprises similar limitations to those in claim 7 and it is therefore rejected for similar reasons.
Claims 2-4, 12-14, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over US 2020/0134336 to Kogure et al. in view of US 2024/0013501 to Sasayama et al., further in view of US 2018/0136720 to Spitzer et al., further in view of US 2023/0333642 to Chimalamarri et al., and further in view of US 2022/0092331 to Stoppa et al.
As per claim 2, Kogure, Sasayama and Spitzer et al. teach the method of claim 1, wherein the data for determining the point of focus is one of: data representing a location of a detected gaze on the display, wherein the point of focus is determined to be the location of the detected gaze on the display; data representing a salient region of an image to be outputted on the display (Kogure, paragraph 6, “detecting an object having saliency in the top-down saliency map among the recognized objects as a candidate for a visual confirmation target at which the driver looks, and a visual confirmation target determination portion determining a visual confirmation target at which the driver looks on a basis of an extraction result of the gaze region and a detection result of the candidate for the visual confirmation target”); data representing a location of a touch input on the display, wherein the point of focus is determined to be the location of the touch input on the display; or data representing a location of a mouse input on the display, wherein the point of focus is determined to be the location of the mouse input on the display.
Kogure, Sasayama and Spitzer et al. do not explicitly teach wherein the point of focus is determined based on the salient region.
Chimalamarri et al. teach wherein the point of focus is determined based on the salient region (paragraph 34, “the electronic device 20 may set a position (e.g., a center) of the third visual element 50c as the expected gaze position 72” Figs. 1D-1E, paragraph 38, “the electronic device 20 identifies a difference 90 between the expected gaze position 72 and the measured gaze position 82 … The new value 32 can compensate for the eye not being at the center of the field-of-view of the image sensor 24. The electronic device 20 replaces the first value 30 with the new value 32”, in other words, a gaze point is defined/assumed to be the center of a salient region).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device of Kogure, Sasayama and Spitzer et al. so that the point of focus is determined based on the salient region, as taught by Chimalamarri et al., for the purpose of improving gaze detection accuracy.
Kogure, Sasayama, Spitzer and Chimalamarri et al. do not explicitly teach wherein the salient region is identified by a bounding box.
Stoppa et al. teach wherein the salient region is identified by a bounding box (Fig. 1, paragraphs 32-33, “a saliency heatmap, such as exemplary saliency heatmap 110 in FIG. 1, may be utilized to identify salient portions of the image, e.g., in the form of generated bounding box(es), wherein the bounding boxes may be configured to enclose the salient objects and/or regions in an image where a user's attention or eye gaze is most likely to be directed when looking at the image”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the device of Kogure, Sasayama, Spitzer and Chimalamarri et al. so that the salient region is identified by a bounding box, as taught by Stoppa et al., for the purpose of performing salient feature analysis using saliency heatmaps.
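To illustrate the heatmap-to-bounding-box step attributed to Stoppa above, a minimal sketch using simple thresholding (the thresholding approach, the 0.5 value, and the function name are assumptions and do not represent Stoppa's actual method):

```python
import numpy as np

def saliency_to_bbox(heatmap: np.ndarray, threshold: float = 0.5):
    """Threshold a saliency heatmap and return a bounding box
    (x_min, y_min, x_max, y_max) enclosing the salient region,
    or None if nothing exceeds the threshold."""
    ys, xs = np.nonzero(heatmap >= threshold)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```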
As per claim 3, Kogure, Sasayama, Spitzer, Chimalamarri and Stoppa et al. teach the method of claim 2, wherein the data for determining the point of focus is data representing multiple salient regions of the image to be outputted on the display, wherein the point of focus is determined based on the bounding box of a salient region having a highest saliency score or highest saliency ranking (Chimalamarri, paragraph 28, “the electronic device 20 selects the location of the third visual element 50c as the expected gaze position 72 because the third visual element 50c has the greatest saliency value among the visual elements 50”).
As per claim 4, Kogure, Sasayama, Spitzer, Chimalamarri and Stoppa et al. teach the method of claim 2, wherein the data for determining the point of focus is data representing multiple salient regions of the image to be outputted on the display (Kogure, paragraph 41, “the candidate detection portion 404 detects the object including saliency equal to or greater than a predetermined saliency in the top-down saliency map as the candidate for the visual confirmation target”, it is implicitly stated that there may be a plurality of potential targets with saliency equal to or greater than a predetermined saliency), wherein respective multiple points of focus are determined based on the respective multiple bounding boxes of the respective multiple salient regions (Kogure, paragraph 41; Chimalamarri, paragraph 37, “the electronic device 20 selects a location of a particular visual element 50 as the expected gaze position 72 when the visual element 50 has a position value that is closest to the position indicated by the measured gaze target 80”, in other words, each one of a plurality of saliency regions, proximate to a sensed gaze, is a potential candidate for a focus point determination, furthermore, as per Stoppa, each saliency region is defined by a corresponding bounding box).
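To illustrate how bounding boxes of salient regions could yield points of focus as discussed for claims 3 and 4, a minimal sketch (the box format, the center-of-box choice, and all names are assumptions; taking the box center loosely parallels Chimalamarri's use of a visual element's center as the expected gaze position):

```python
from typing import List, Optional, Tuple

# (x_min, y_min, x_max, y_max, saliency_score) -- assumed format
SalientBox = Tuple[float, float, float, float, float]

def focus_from_boxes(boxes: List[SalientBox]) -> Optional[Tuple[float, float]]:
    """Take the center of the bounding box with the highest saliency score
    as the point of focus (the claim 3 variant)."""
    if not boxes:
        return None
    x0, y0, x1, y1, _ = max(boxes, key=lambda b: b[4])
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def foci_from_boxes(boxes: List[SalientBox]) -> List[Tuple[float, float]]:
    """One candidate point of focus per bounding box (the claim 4 variant,
    where each salient region yields a respective point of focus)."""
    return [((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0) for b in boxes]
```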
As per claim 12, it comprises similar limitations to those in claim 2 and it is therefore rejected for similar reasons.
As per claim 13, it comprises similar limitations to those in claim 3 and it is therefore rejected for similar reasons.
As per claim 14, it comprises similar limitations to those in claim 4 and it is therefore rejected for similar reasons.
As per claim 21, it comprises similar limitations to those in claim 2 and it is therefore rejected for similar reasons.
As per claim 22, it comprises similar limitations to those in claim 3 and it is therefore rejected for similar reasons.
As per claim 23, it comprises similar limitations to those in claim 4 and it is therefore rejected for similar reasons.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSE R SOTO LOPEZ whose telephone number is (571) 270-5689. The examiner can normally be reached Monday through Friday, from 8 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patrick Edouard can be reached on (571) 272-7603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSE R SOTO LOPEZ/Primary Examiner, Art Unit 2622