Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed December 10, 2025 has been entered and considered. Claims 1, 3, 5-9, 11, and 20 have been amended. Claims 2, 4, and 10 have been canceled. Claim 12 has been added. The amendment does not overcome the rejection under 35 U.S.C. 103 previously set forth. The rejection is maintained and is restated below with additional discussion of the amended limitations; accordingly, this action is made final.
Drawing Objections
In view of the amendments to the drawings, the objections are withdrawn as moot.
Specification Objections
In view of the amendments to the specification, the objections are withdrawn as moot.
Response to Arguments
Applicant's arguments filed 12/10/25 have been fully considered but they are not persuasive.
Applicant argues that the prior art does not disclose the newly added amendments to the independent claims. Remarks of 12/10/25 at Pg. 14. Examiner respectfully disagrees.
Applicant argues (Pg. 14):
For example, Applicant submits that Yukito does not teach to predict that the vehicle is to depart from a traveling lane.
Examiner responds:
Yukito discloses (Pg. 9, “The state specifying unit 45 specifies the running state of the vehicle V based on the output results from the outer camera 12… for example, the state identifying unit 45 determines whether or not the vehicle V is meandering based on the traveling direction of the vehicle V, the traveling locus of the vehicle V, the positional relationship between the vehicle V and the lane, and the like.”; Pg. 9, “The danger level setting unit 46 calculates the degree of danger during traveling based on… the state identification unit 45, and sets the danger level.”; Pg. 10, “Further, when the danger level is set by the danger level setting unit 46, the control unit 40 causes the notification unit 20 to notify the driver of the alert in a different form in accordance with the set danger level.”). Determining the traveling direction of the vehicle as an indication of “meandering” alongside the positional relationship between the vehicle and the lane, as well as assessing a degree of danger based on this information and outputting a notification, fundamentally describes a prediction. The method of Yukito assesses the travel direction and position of the vehicle, using them to “predict” whether this may be a dangerous scenario that would require outputting a notification.
Claim Objections
Claim 12 is objected to because of the following informalities: Line 20 recites “predetermined interval period,” which should read “a predetermined interval period.” Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3, 5, 8-9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Yukito (Previously cited).
Regarding claim 1, Yukito teaches a driving support device that is detachably attached to a vehicle via a detachable member (Pg. 4, “As shown in FIG. 1, the drive recorder 10 is formed in a substantially rectangular box shape, and is attached to a windshield glass WS of a vehicle”), the driving support device comprising: a storage medium storing computer-readable instructions (Pg. 4, “the drive recorder 10 includes… a storage unit 30”); and one or more processors connected to the storage medium, wherein the one or more processors execute the computer-readable instructions to perform (Pg. 15, “The computer system mentioned here includes hardware such as an OS and peripheral devices.”, processor is necessary): acquiring one or more images obtained by imaging a surrounding situation of the vehicle (Pg. 4, “Then, the outer camera 12 captures an image of a situation outside the vehicle V”), predicting that the vehicle is to depart from a traveling lane in which the vehicle is traveling, based on information acquired from the one or more images, resulting in a lane prediction (Pg. 9, “The state specifying unit 45 specifies the running state of the vehicle V based on the output results from the outer camera 12… for example, the state identifying unit 45 determines whether or not the vehicle V is meandering based on the traveling direction of the vehicle V, the traveling locus of the vehicle V, the positional relationship between the vehicle V and the lane, and the like.”; Pg. 9, “The danger level setting unit 46 calculates the degree of danger during traveling based on… the state identification unit 45, and sets the danger level.”; Pg. 
10, “Further, when the danger level is set by the danger level setting unit 46, the control unit 40 causes the notification unit 20 to notify the driver of the alert in a different form in accordance with the set danger level.”), determining a notification intensity of a notification for a driver of the vehicle, based on the lane prediction (Pg. 10, “Further, when the danger level is set by the danger level setting unit 46, the control unit 40 causes the notification unit 20 to notify the driver of the alert in a different form in accordance with the set danger level.”), and causing a notifier to: output a notification of the determined notification intensity, wherein the target comprises a road marking of a lane in which the vehicle is traveling (Pg. 4, “Then, the outer camera 12 captures an image of a situation outside the vehicle V”; Pg. 9, “the shape of the road (lane)”) and a driver of the vehicle (Pg. 4, “Then, the inner camera 14 captures an image of the situation in the cabin of the vehicle V (mainly, the driver's face, eyes, and the like)”), and the change of the target comprises a change of a position of the vehicle relative to the road marking (Pg. 9, “Further, for example, the state identifying unit 45 determines whether or not the vehicle V is meandering based on… the positional relationship between the vehicle V and the lane”) and a change in behavior of the driver (Pg. 
9, “Specifically, the state specifying unit 45 specifies the state of the traveling driver… the state identification unit 45 detects the driver's eyes and the like, and detects whether the driver is looking aside, whether the driver is drowsy, or the like”), wherein the one or more images comprises an image in which a face of the driver driving the vehicle appears, output a notification of an alarm with a first predetermined intensity when it is determined: (a1) that a direction of the driver's face or sightline based on the one or more images is fixed to a traveling direction and is not a leftward direction of the vehicle for a predetermined time or more (Pg. 7, “The driver's state includes, for example, a state of looking straight ahead, a state of looking aside”) and (a2) that the vehicle is swinging in the leftward direction on a basis of the one or more images (Pg. 7, “The items of the state of the vehicle V include, for example, a state in which the vehicle V is traveling straight, a state in which the vehicle V is about to make a right or left turn… A state of meandering or not. The first state cost is set so as to increase as the degree of danger to the state of the vehicle V increases.”), and output a notification of an alarm with a lower intensity than the first predetermined intensity or causing the notifier not to output a notification of an alarm when one or both of (a1) and (a2) are not satisfied (Pg. 7, “The second state cost is set to increase as the degree of danger to the driver's state and the surrounding state of the vehicle V increases. For example, when the driver is looking aside and the vehicle ahead is approaching at the risk of an accident between vehicles, the second state cost is set to “18”.”, both the driver sightline and the vehicle swinging direction are taken into account when determining danger. The notification intensity (state cost) increases as the degree of danger increases. 
One of ordinary skill in the art would recognize the opposite to be true as well), output a notification of an alarm with a second predetermined intensity when it is determined: (b1) that the direction of the driver's face or sightline based on the one or more images is fixed to the traveling direction and is not a rightward direction of the vehicle for a predetermined time or more, and (b2) that the vehicle is swinging in the rightward direction on the basis of the one or more images, and output a notification of an alarm with a lower intensity than the second predetermined intensity or causing the notifier not to output a notification of an alarm (Same analysis as above).
Yukito does not explicitly link the driver gaze detection and the vehicle swinging detection to determine a notification intensity. However, Yukito provides examples of other factors that are linked to determine a notification intensity (Pg. 7). In addition, on Pg. 2 the “setting unit” sets a danger level based on the road environment, the traveling state of the vehicle, the state of the driver, and a peripheral state of the vehicle. The state of the vehicle is said to include “a state in which the vehicle is meandering”. While this is not an express enumeration of all possible combinations, it is a teaching that combinations of these factors should be used to determine a danger level.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yukito to incorporate a link between the driver gaze detection and the vehicle swinging detection to determine a notification intensity. Yukito utilizes both driver gaze detection and vehicle swinging detection to determine a degree of danger. Yukito also discloses an example of linking multiple factors when assessing a degree of danger (Pg. 7, “For example, when the driver is looking aside and the vehicle ahead is approaching at the risk of an accident between vehicles, the second state cost is set to “18”.”). One of ordinary skill in the art would understand that combining the driver gaze and vehicle swing state to trigger alarms with a predetermined intensity when both conditions are satisfied, or lowering the alarm intensity otherwise, tailors alerts more effectively to the specific situation and decreases the chance of false alarms.
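The linked-condition logic described in this rationale can be illustrated with a minimal sketch. The boolean inputs and the numeric intensity values here are hypothetical assumptions for illustration only; they are not values disclosed by Yukito or recited in the claim:

```python
def alarm_intensity(gaze_fixed_ahead: bool, swinging_left: bool,
                    first_intensity: int = 2, lower_intensity: int = 1) -> int:
    """Sketch of the (a1)/(a2) mapping: an alarm with a first
    predetermined intensity is output only when the driver's gaze is
    fixed to the traveling direction (not leftward) AND the vehicle is
    swinging leftward; otherwise a lower-intensity alarm is output.
    The intensity values are illustrative placeholders."""
    if gaze_fixed_ahead and swinging_left:
        return first_intensity
    return lower_intensity
```

The (b1)/(b2) rightward case would follow the same pattern with the swing direction mirrored.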
Regarding claim 3, Yukito teaches all of the elements of claim 1, as stated above, as well as wherein the one or more processors execute the computer-readable instructions to perform: causing the notifier to output a notification of a higher notification intensity when a direction of the driver's face or sightline based on the one or more images is outside of a preset range than when the direction of the driver's face or sightline is not outside of the preset range (Pg. 9, “the state identification unit 45 detects the driver's eyes and the like, and detects whether the driver is looking aside, whether the driver is drowsy, or the like”; Pg. 9, “The danger level setting unit 46 calculates the degree of danger during traveling based on the output results from the… state identification unit 45, and sets the danger level”, the state identification unit considers a driver’s sightline being outside of the preset range, indicating that a higher danger level (notification intensity) would be output).
Regarding claim 5, Yukito teaches all of the elements of claim 1, as stated above, as well as wherein the one or more images further comprises an image in which a rear nearby area behind the vehicle in a neighboring lane adjacent to the traveling lane appears (Pg. 4, “Then, the outer camera 12 captures an image of a situation outside the vehicle V (a situation in front of the vehicle, behind the vehicle, and a side of the vehicle)”; Pg. 9, “Specifically, the state specifying unit 45 specifies the traveling state of the vehicle V based on the shape of the road (lane)”), and wherein the one or more processors execute the computer-readable instructions to further perform: causing the notifier to output a notification of a third alarm with a higher intensity than the predetermined intensity when (a1) and (a2) are satisfied, and it is determined (a3) that there is a vehicle in a rear nearby area on the left of the vehicle on the basis of the one or more images (Pg. 9, “the state specifying unit 45 outputs information on the corresponding item to be described later. Is output to the danger level setting unit 46.”; Pg. 9, “In addition, the state identification unit 45 detects vehicles and people around the vehicle V, and determines whether there is another vehicle ahead of the vehicle V, and whether the inter-vehicle distance with another vehicle ahead is ensured.”, detects vehicles around the vehicle as well as inter-vehicle distance and outputs this information to the danger level setting unit (notification control)); causing the notifier to output a notification of a second alarm with an intensity higher than the predetermined intensity and lower than that of the third alarm when (a1) and (a2) are satisfied, and (a3) is not satisfied (Pg. 
10, “Then, the danger level setting unit 46 adds the weather cost, the accident risk cost, the first state cost, and the second state cost to the extracted data cost, and adds the first personal cost and the second personal cost.”; Pg. 10, “Specifically, when the corrected data cost is "1" or more and "3" or less, the danger level is set to 1.When the corrected data cost is "4" or more and "6" or less, the danger level is set to 2. When the data cost is “7” or more and “9” or less, the risk level is set to 3. When the corrected data cost is “10”or more and “12” or less, the risk level is set to 4 and the corrected data cost is “13”. When the value is equal to or less than "15", the risk level is set to 5, and when the correction data cost is equal to or more than "16" and equal to or less than "18", the risk level is set to 6. Then, the higher the danger level, the higher the danger level.”, multiple factors are considered to determine the intensity of the notification/danger level. One of ordinary skill in the art would understand that linking specifically (a1), (a2), and (a3) to determine the notification intensity would be a routine optimization of the method already disclosed by Yukito);
causing the notifier to output a notification of the third alarm with a higher intensity than the predetermined intensity when (b1) and (b2) are satisfied, and it is determined (b3) that there is a vehicle in a rear nearby area on the right of the vehicle on the basis of the one or more images; and causing the notifier to output a notification of the second alarm when (b1) and (b2) are satisfied, and (b3) is not satisfied (Same analysis as above).
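The corrected-data-cost bands quoted above from Pg. 10 of Yukito map to danger levels in uniform bands of three. The following is an illustrative sketch of that quoted mapping, not code from the reference:

```python
def danger_level(corrected_data_cost: int) -> int:
    """Map a corrected data cost to a danger level per the bands quoted
    from Yukito at Pg. 10: 1-3 -> level 1, 4-6 -> 2, 7-9 -> 3,
    10-12 -> 4, 13-15 -> 5, 16-18 -> 6."""
    if not 1 <= corrected_data_cost <= 18:
        raise ValueError("cost outside the quoted 1-18 range")
    # Each band of three costs maps to the next danger level.
    return (corrected_data_cost - 1) // 3 + 1
```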
Regarding claim 8, Yukito teaches all of the elements of claim 1, as stated above, as well as causing the notifier to output a notification of an alarm with a higher intensity than the predetermined intensity when it is determined on the basis of the one or more images that there is a vehicle in the rear nearby area than that of when it is determined that there is no vehicle in the rear nearby area (Pg. 9, “the state specifying unit 45 outputs information on the corresponding item to be described later. Is output to the danger level setting unit 46.”; Pg. 9, “In addition, the state identification unit 45 detects vehicles and people around the vehicle V, and determines whether there is another vehicle ahead of the vehicle V, and whether the inter-vehicle distance with another vehicle ahead is ensured.”, detects vehicles around the vehicle as well as inter-vehicle distance and outputs this information to the danger level setting unit (notification control). One of ordinary skill in the art would recognize that the state specifying unit would output an increased degree of danger to the notification control if a vehicle is present in the rear nearby area compared to there being no vehicle).
Regarding claim 9, Yukito teaches all of the elements of claim 1, as stated above, as well as including a communicator configured to transmit information for causing the notifier to output the notification to the notifier; a first camera: not connected to an onboard network of the vehicle, and configured to acquire a first image which is included in the one or more images, wherein the first image comprises a surrounding situation on the front of the vehicle; a second camera: not connected to the onboard network of the vehicle, and configured to capture a second image which is included in the one or more images, wherein the second image comprises the driver, a window of a driver's seat side of the vehicle, a window on a passenger's seat of the vehicle, and a rear area of the vehicle; and a housing accommodating the driving support device, the first camera, and the second camera (Pg. 4, “As shown in FIG. 2, the drive recorder 10 includes an outer camera 12, an inner camera 14, a GPS receiver 16, a sensor 18, a communication unit 28, a storage unit 30, and a control unit 40. It is composed of In addition, the drive recorder 10 has a notifying unit 20 for notifying the driver of an alert.”).
Regarding claim 11, the claimed driving support method performs substantially the same functions as the device of claim 1. It is rejected under the same analysis.
Claim(s) 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Yukito in view of Sathyanarayana et al. (US Patent Pub. No. 2018/0365533 A1, published 2018).
Regarding claim 6, Yukito teaches all of the elements of claim 1, as stated above, as well as causing the notifier to output a notification of an alarm with a predetermined intensity when it is determined (c2) that the vehicle is swinging in the leftward direction on the basis of the one or more images and (d2) that the vehicle is swinging in the rightward direction on the basis of the one or more images and causing the notifier to output a notification of an alarm (Pg. 7, “The items of the state of the vehicle V include, for example, a state in which the vehicle V is traveling straight, a state in which the vehicle V is about to make a right or left turn… A state of meandering or not.”).
Yukito does not explicitly disclose wherein one or more images include an image in which one or both of an arm or a hand of a driver appear or to determine (c1) that the hand or the arm is not performing an operation of controlling steering of moving the vehicle to the left/right on the basis of the one or more images.
Sathyanarayana teaches wherein one or more images comprises an image in which one or both of an arm or a hand of a driver appear or to determine (c1) that the hand or the arm is not performing an operation of controlling steering of moving the vehicle to the left/right on the basis of the one or more images (Para. 55, “In a first variation, determining driver control inputs includes performing object recognition and classification to determine positions of a driver's hands on a steering wheel of the vehicle, determining trajectories of the driver's hand in the rotational plane of the steering wheel, and computing a steering input parameter (e.g., a number of degrees through which the steering wheel was rotated) from the imagery data.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yukito to incorporate the teachings of Sathyanarayana to include one or more images comprises an image in which one or both of an arm or a hand of a driver appear or to determine (c1) that the hand or the arm is not performing an operation of controlling steering of moving the vehicle to the left/right on the basis of the one or more images. The obviousness analysis under claim 1 is incorporated herein by reference. Yukito discloses varying notification intensities based on a number of factors related to the vehicle and driver state, even giving examples of linked factors influencing the degree of danger. Sathyanarayana provides another factor to be considered, the position of the hands/arms on the steering wheel as well as the trajectory in the rotational plane applied by the driver on the steering wheel. One of ordinary skill in the art would recognize that including hand position on the steering wheel allows for a more robust detection of vehicle meandering, as it can be determined whether the driver is causing the movement or not, providing a more accurate assessment of the degree of danger.
Regarding claim 7, the analysis and rationale set forth for claim 5 are equally applicable to claim 7, with claim 6 serving as the base claim in place of claim 1. No additional limitations are present in claim 7 that would alter the conclusion of obviousness.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Yukito in view of Dijksterhuis et al. (NPL, “An Adaptive Driver Support System: User Experiences and Driving Performance in a Simulator”, published 2012, PDF attached).
Regarding claim 12, Yukito teaches a driving support device that is detachably attached to a vehicle via a detachable member (Pg. 4, “As shown in FIG. 1, the drive recorder 10 is formed in a substantially rectangular box shape, and is attached to a windshield glass WS of a vehicle”), the driving support device comprising: a storage medium storing computer-readable instructions (Pg. 4, “the drive recorder 10 includes… a storage unit 30”); and one or more processors connected to the storage medium, wherein the one or more processors execute the computer-readable instructions to perform (Pg. 15, “The computer system mentioned here includes hardware such as an OS and peripheral devices.”, processor is necessary): acquiring one or more images obtained by imaging a surrounding situation of the vehicle (Pg. 4, “Then, the outer camera 12 captures an image of a situation outside the vehicle V”), predicting that the vehicle is to depart from a traveling lane in which the vehicle is traveling, based on information acquired from the one or more images, resulting in a lane prediction (Pg. 9, “The state specifying unit 45 specifies the running state of the vehicle V based on the output results from the outer camera 12… for example, the state identifying unit 45 determines whether or not the vehicle V is meandering based on the traveling direction of the vehicle V, the traveling locus of the vehicle V, the positional relationship between the vehicle V and the lane, and the like.”; Pg. 9, “The danger level setting unit 46 calculates the degree of danger during traveling based on… the state identification unit 45, and sets the danger level.”; Pg. 
10, “Further, when the danger level is set by the danger level setting unit 46, the control unit 40 causes the notification unit 20 to notify the driver of the alert in a different form in accordance with the set danger level.”), determining a notification intensity of a notification for a driver of the vehicle, based on the lane prediction, causing a notifier to output a notification of the determined notification intensity (Pg. 10, “Further, when the danger level is set by the danger level setting unit 46, the control unit 40 causes the notification unit 20 to notify the driver of the alert in a different form in accordance with the set danger level.”), causing the notifier to output a first notification of an alarm in a case that a value of the danger is higher and it is predicted that the vehicle is to depart from the traveling lane, and causing the notifier to output a second notification of an alarm with a lower intensity than an intensity of the first notification in a case that a value of the danger is lower and it is predicted that the vehicle is to depart from the traveling lane (Pg. 7, “The second state cost is set to increase as the degree of danger to the driver's state and the surrounding state of the vehicle V increases. For example, when the driver is looking aside and the vehicle ahead is approaching at the risk of an accident between vehicles, the second state cost is set to “18”.”, The notification intensity (state cost) increases as the degree of danger increases. One of ordinary skill in the art would recognize the opposite to be true as well).
Yukito does not explicitly disclose calculating a noise area indicating a range in which the vehicle travels with a deviation from the center line, determining whether the vehicle travels outside of the noise area at predetermined interval periods, or counting up a counter in a case that the vehicle travels outside of the noise area. However, Yukito does detect the vehicle position relative to the lane markers as well as the vehicle's meandering path.
[Reprinted figure: media_image1.png (Dijksterhuis, Fig. 2C), greyscale]
Dijksterhuis teaches calculating a noise area, wherein the noise area is an area indicating a range in which the vehicle travels with a deviation from the center line of the traveling lane of the vehicle when the vehicle is traveling and depending on driving of a driver of the vehicle (Fig. 2C, reprinted above; Pg. 776, “Finally, all 300 LP values were used to calculate SDLP, which triggered the HUD when more than 22 cm.”, SDLP (Standard Deviation of Lane Position) inherently represents the spread of lateral position), determining whether the vehicle travels outside of the noise area or not at a predetermined interval period (Pg. 776, “Finally, all 300 LP values were used to calculate SDLP, which triggered the HUD when more than 22 cm.”; Pg. 775, “The vehicle’s LP, defined as the distance between the center of the participant’s car and the middle of the (right-hand) driving lane, was sampled at 10 Hz”), counting up a counter in a case that the vehicle travels outside of the noise area, and counting down the counter in a case that the vehicle does not travel outside of the noise area (Pg. 775, “Each second, the support algorithm counted the number of LP samples values inside these near edge zones for the preceding 30 s and divided it by the total number of samples during this period (300 samples). Driving in the near-edge zones for more than 7.5 s (25%) triggered the HUD. For the second trigger variable, driving in the over-edge zones, the threshold was set to 3 s (10%). Finally, all 300 LP values were used to calculate SDLP, which triggered the HUD when more than 22 cm.”, maintaining a count of samples within a condition over a moving window is an implementation of temporal accumulation of events. 
This is analogous to incrementing a counter when the condition is met and decrementing or allowing decay when the condition is not met), causing the notifier to output a first notification of an alarm in a case that a value of the counter is equal to or greater than a threshold value (Pg. 775, “Driving in the near-edge zones for more than 7.5 s (25%) triggered the HUD.”) and causing the notifier to output nothing in a case that a value of the counter is less than the threshold value.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yukito to incorporate the teachings of Dijksterhuis to include calculating a noise area indicating a range in which the vehicle travels with a deviation from the center line, determining whether the vehicle travels outside of the noise area at predetermined interval periods, or counting up a counter in a case that the vehicle travels outside of the noise area. Yukito discloses a driver support method which takes into account vehicle position, vehicle travel locus, and vehicle position in relation to lane markings. Yukito uses this information (alongside other factors) to determine what intensity is needed for an output notification. However, Yukito does not mention adaptively determining a range in which the car should ideally be and performing notification processing based on that range. Dijksterhuis discloses an adaptive driver support system which takes into account vehicle position over time and creates a range to be used for notifying drivers once it has been surpassed. Dijksterhuis also discloses making a determination on vehicle position based on the sampled time the vehicle has spent driving in specific areas.
One of ordinary skill in the art would understand that incorporating Dijksterhuis' method, which uses the statistical evaluation of vehicle lateral position over time (standard deviation of lateral position and temporal accumulation of deviation-related samples), into the system of Yukito, which controls notification intensities based on lane departure prediction, would provide graduated warnings corresponding to the likelihood or severity of lane departure based on a defined tolerance area (noise area) and the accumulated deviation information (counter). This would predictably improve the capability to distinguish normal driving variation from unintended lane departure and therefore increase the accuracy of warnings. As disclosed by Dijksterhuis: “On closer examination, it seems that these effects were mainly caused by the adaptive support. Compared with either the nonadaptive or the no-support mode, participants drove more centrally, swerved less, and drove less on the shoulder.”
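The Dijksterhuis trigger logic quoted above (LP sampled at 10 Hz over a 30 s window of 300 samples, with the HUD triggered when SDLP exceeds 22 cm or when too large a fraction of samples falls in an edge zone) can be sketched as follows. This is an illustrative sketch only; the 0.5 m near-edge boundary is a placeholder assumption, not a value taken from the reference:

```python
import statistics

def hud_trigger(lp_samples, near_edge_boundary=0.5,
                sdlp_threshold=0.22, near_edge_frac=0.25):
    """Return True if the warning should trigger for a window of
    lane-position (LP) samples, given in meters from lane center.
    Triggers when SDLP exceeds 22 cm, or when more than 25% of the
    samples lie in the near-edge zone (boundary is a placeholder)."""
    sdlp = statistics.stdev(lp_samples)  # standard deviation of lane position
    in_zone = sum(abs(lp) >= near_edge_boundary for lp in lp_samples)
    return sdlp > sdlp_threshold or in_zone / len(lp_samples) > near_edge_frac
```

A centered, steady trace stays below both thresholds, while a swerving trace trips the SDLP criterion.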
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID A WAMBST whose telephone number is (703)756-1750. The examiner can normally be reached M-F 9-6:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID ALEXANDER WAMBST/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698