DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments and amendments received February 13, 2026, have been fully considered. With regard to 35 U.S.C. § 103, Applicant argues that the cited prior art does not disclose [see applicant argument pages 8-9]. This language corresponds to the newly amended language of claims 1-8, 10-19 and 20.
As such, these arguments have been considered, but they are directed to newly amended language, which is addressed below. See the rejection below for how the art of record reads on the newly amended language, as well as the examiner's interpretation of the cited art in view of the presented claim set. Furthermore, in response to Applicant's argument, Kishimoto teaches at least an image displayed on an image display device that is corrected according to an area of a visual field or a blind spot, which changes according to the eye location of a driver. A line of sight sensor is used as the sensing device, and the eye location or the direction of the line of sight of the driver is sensed as an event to be sensed. The system of Kishimoto further teaches an image control device that determines whether a person exists using daytime and nighttime analysis. As such, it is the Examiner's position that Applicant has not yet submitted claims drawn to limitations which define the operation and apparatus of Applicant's disclosed invention in a manner which distinguishes over the prior art. It is Applicant's right to continue to claim their invention as broadly as possible; it is likewise the Examiner's right to continue to interpret the claim language as broadly as is reasonable. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983). Therefore, based on the broadest reasonable interpretation of the claimed invention, the Examiner maintains the rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-8, 10-19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kishimoto (US 2019/0031102) in view of Fursich et al. (US 2016/0137126).
Regarding claim 1: Kishimoto teaches:
1. A method performed by a computing system, the method comprising: determining a viewpoint of a driver of a vehicle;
[0017] the image control device takes an image using at least an image pickup device located in a direction that is visible from sensed eye location among a plurality of the image display devices, senses an area to become the blind spot based on the eye location or the direction of the line of sight of the driver sensed, and corrects image data in a manner that an image of sensed area to become the blind spot from the driver is displayed on a screen of an image display device located at a position in the direction of the line of sight sensed among a plurality of the image display devices.
[0052] An image display system for vehicle use and a vehicle equipped with the image display system according to a first embodiment of the present invention will be described. In a conventional image display system for vehicle use, an area of an image taken by an image pickup device and an area of an image displayed on a screen of an image display device are fixed, so that an area to become a blind spot from a driver and the area of the image displayed on the screen of the image display device are not always coincided with each other when a position of a seat is adjusted according to the physique of the driver. In contrast, the first embodiment of the present invention is configured to correct an image displayed on an image display device according to an area of a visual field or a blind spot which will be changed according to eye location of a driver. A line of sight sensor is used as the sensing device, and the eye location or a direction of the line of sight of the driver is sensed as an event to be sensed.
Kishimoto, 0017, 0052, emphasis added.
obtaining one or more images of a portion of a physical environment outside of the vehicle that is obscured, from the viewpoint of the driver, by a portion of the vehicle;
See Kishimoto, 0017 and 0052, reproduced above, emphasis added.
However, Kishimoto fails to explicitly teach the following limitation, which Fursich teaches:
applying parallax compensation to the one or more images based on the viewpoint of the driver to generate a compensated image;
[0150] As another aspect of the present invention, for small head movements only small field-of-view changes may be applied (preferably linear). However, in cases where the vehicle is entering a freeway or autobahn or changing to the left lane, the driver may want to know whether there is a vehicle in the critical blind spot area 17. By moving the head stronger (more of changing in position) to the right he or she may indicate his or her wish, and the system, responsive to a determination of such a head movement, may shift (in 2D) or roll (in 3D or pseudo parallax) the field-of-view stronger for increasing the covering of the blind-spot-area. The shifting/rolling may be linear, non-linear, exponential, sinusoidal, synodic shaped, logarithmic or may have or change or increase in a polynomial or discontinuous manner or behavior.
Fursich, 0149-0150 and 0172, emphasis added.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Fursich with the system of Kishimoto in order to apply parallax compensation to the one or more images based on the viewpoint of the driver to generate a compensated image, thereby enhancing the depth perception of the driver. Fursich, 0149.
Furthermore, Kishimoto teaches:
adjusting a visual characteristic of the compensated image based on a time of day;
and displaying the compensated image using a display component inside the vehicle.
See Kishimoto, 0017 and 0052, reproduced above.
Kishimoto, 0017, 0052, 0058-0059, 0078, emphasis added.
Note: The motivation applied to claim 1 above applies equally to claims 2-20 as presented below.
Regarding claim 2: Kishimoto and Fursich teach:
2. The method of claim 1, further comprising: furthermore, Kishimoto teaches: detecting that a line of sight of the driver intersects the portion of the vehicle, wherein the computing system displays the compensated image in response to detecting that the line of sight of the driver intersects the portion of the vehicle.
See Kishimoto, 0017 and 0052, reproduced in the rejection of claim 1 above.
Regarding claim 3: Kishimoto and Fursich teach:
3. The method of claim 2, furthermore, Kishimoto teaches: wherein detecting that the line of sight of the driver of the vehicle intersects the portion of the vehicle comprises: receiving, from an interior camera of the vehicle, information indicating a head position of the driver;
[0058] In addition, the line of sight sensor 18 for sensing the eye location of the driver 1 or direction of the line of sight is provided at an interior portion of the vehicle 10, for example, at a ceiling portion above the room mirror 17. The line of sight sensor 18 is configured of an image pickup element to take an image of a face of a driver with visible rays and an image processing device to perform a face recognition processing from image data taken by the image pickup element, and to sense, an eye location or a direction of line of sight of the driver, for example. Alternatively, the line of sight sensor 18 may be configured to include an infrared ray image pickup element (an image pickup element having sensitivity in the infrared region) to receive far infrared rays emitted from the face of the driver, and an image processing device to perform a thermographic processing to image data taken by the image pickup device and to sense an eye location or a direction of line of sight of the driver from temperature distribution. The position of the line of sight sensor 18 is not limited in particular and it may be any position as long as it can take an image of the face of the driver, in other words, it may be provided within an area that comes into the field of view of the driver. In addition, these image processing devices can be shared by an image control device 19 which will be described later.
Kishimoto, 0058, emphasis added.
estimating the line of sight of the driver based on the head position of the driver;
See Kishimoto, 0058, reproduced above.
Kishimoto, 0058-0059 and 0066, emphasis added.
and determining that the line of sight of the driver intersects the portion of the vehicle
Kishimoto, 0058-0059, 0066
Furthermore, Fursich teaches: based on a three-dimensional representation of the vehicle.
Fursich, 0149-0150 and 0172
Regarding claim 4: Kishimoto and Fursich teach:
4. The method of claim 2, furthermore, Kishimoto teaches: wherein detecting that the line of sight of the driver of the vehicle intersects the portion of the vehicle comprises: receiving, from an eye-tracking system of the vehicle, information indicating an eye position of the driver;
See Kishimoto, 0058-0059 and 0066; paragraph 0058 is reproduced in the rejection of claim 3 above.
estimating the line of sight of the driver based on the eye position of the driver;
See Kishimoto, 0058-0059 and 0066; paragraph 0058 is reproduced in the rejection of claim 3 above.
and determining that the line of sight of the driver intersects the portion of the vehicle
Kishimoto, 0058-0059 and 0066
Fursich teaches: based on a three-dimensional representation of the vehicle.
Fursich, 0149-0150 and 0172
Regarding claim 5: Kishimoto and Fursich teach:
5. The method of claim 1, furthermore, Kishimoto teaches: wherein displaying the compensated image using the display component comprises projecting the compensated image onto an interior surface of the portion of the vehicle.
[0055] As shown in FIG. 1, a plurality of image display devices 14a to 14e is provided at predetermined positions corresponding to the image pickup devices 11a and 11b. For example, the image display devices 14a and 14b are provided on the A-pillars 13a and 13b, which are causal objects that become blind spots from the driver 1, to display images taken by the image pickup devices 11a and 11b on their display screens. The image display devices 14a and 14b are the image display devices using flexible organic light emitting diodes (OLEDs), and as indicated by hatching in the drawing, the image display devices 14a and 14b are installed on the inner trims of the A-pillars 13a and 13b so as to be continuously arranged with a windshield 16 and door glasses 24a and 24b. By taking images in predetermined areas by the image pickup devices 11a and 11b and displaying the images in the areas of the blind spots due to the A-pillars 13a and 13b on the image display devices 14a and 14b, a scenery visible through the windshield 16 and the images displayed on the image display devices 14a and 14b and a scenery visible through the door glasses 24a and 24b are continuous, so that it can generate a sense of openness to the driver as if the A-pillars 13a and 13b do not exist. In particular, since the responsiveness of the OLED is much faster than that of the liquid crystal display, the image can be switched at high speed even when the vehicle 1 is in high speed traveling, so that the driver 1 is less likely to feel unnatural. In addition, unlike the liquid crystal display, since the response speed of the OLEDs does not decrease even at low temperatures, it is possible to display a good quality even in the extremely cold interior environment of the vehicle in winter. Furthermore, since the image display devices 14a and 14b are provided near the eye location of the driver 1, it is possible to make the resolution much finer than that of the liquid crystal display.
It is preferable to use OLEDs rarely having view angle dependency. Although not shown, image display devices using flexible OLEDs may be provided on the inner trims of the B-pillars 13c and 13d and the C-pillars 13e and 13f in the center portion of the vehicle. Furthermore, image display devices using translucent OLEDs may be provided on the door glasses in the vicinities of the B-pillars 13c and 13d and the C-pillars 13e and 13f. Still furthermore, an image display device using an OLED may be provided also in a dashboard 15, if necessary. It may be configured that a map image of a car navigation system is displayed while driving or an image of a television broadcast and so forth are displayed while parking, in addition to the images taken by the image pickup devices.
Kishimoto, 0055, emphasis added.
Regarding claim 6: Kishimoto and Fursich teach:
6. The method of claim 1, furthermore, Kishimoto teaches: wherein displaying the compensated image using the display component comprises displaying the compensated image using a display screen located on the portion of the vehicle.
See Kishimoto, 0055, reproduced in the rejection of claim 5 above.
Regarding claim 7: Kishimoto and Fursich teach:
7. The method of claim 1, furthermore, Kishimoto teaches: wherein displaying the compensated image using the display component comprises displaying the compensated image on a head-up display of the vehicle.
See Kishimoto, 0055, reproduced in the rejection of claim 5 above.
Regarding claim 8: Kishimoto and Fursich teach:
8. The method of claim 1, furthermore, Kishimoto teaches: wherein determining the viewpoint of the driver comprises: receiving, from an interior camera of the vehicle, information indicating a head position of the driver; and determining the viewpoint of the driver based on the head position of the driver.
See Kishimoto, 0058, reproduced in the rejection of claim 3 above.
Kishimoto, 0058-0059 and 0078, emphasis added.
Regarding claim 10: Kishimoto and Fursich teach:
10. The method of claim 1, furthermore, Kishimoto teaches: wherein obtaining the one or more images comprises receiving one or more live video images from one or more external cameras of the vehicle.
Kishimoto, at least 0055.
Regarding claim 11: Kishimoto and Fursich teach:
11. The method of claim 11, wherein obtaining the one or more images comprises receiving two or more images from two or more external cameras of the vehicle, wherein the compensated image includes a merging of the two or more images.
Kishimoto, at least 0055.
Claim 12 lists all similar elements of claim 1. Therefore, the supporting rationale of the rejection of claim 1 applies equally to claim 12. Furthermore, Kishimoto teaches detecting an opaque portion.
Kishimoto, at least 0055.
Regarding claim 13: Kishimoto and Fursich teach:
13. The method of claim 12, furthermore, Kishimoto teaches: wherein displaying the compensated image on the surface of the opaque portion of the vehicle comprises projecting the compensated image onto the surface of the opaque portion of the vehicle.
Kishimoto, at least 0055.
Regarding claim 14: Kishimoto and Fursich teach:
14. The method of claim 12, furthermore, Kishimoto teaches: wherein detecting that a line of sight of the driver of the vehicle intersects the opaque portion of the vehicle comprises receiving information from a camera or an eye-tracking system.
Kishimoto, at least 0055.
Regarding claim 15: Kishimoto and Fursich teach:
15. The method of claim 12, further comprising: furthermore, Kishimoto teaches: determining the viewpoint of the driver based on information received from a camera or from an eye-tracking system.
Kishimoto, at least 0055.
Claims 16-20 list all similar elements of claims 1-4 and 8, but in system form rather than method form. Therefore, the supporting rationale of the rejection of claims 1-4 and 8 applies equally to claims 16-20. Furthermore, Fursich teaches: a system, processor and memory.
Fursich, 0122
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE whose telephone number is (571)270-1117. The examiner can normally be reached Monday-Friday 8:00-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL T TEKLE/Primary Examiner, Art Unit 2481