DETAILED ACTION
Claims 1 and 14-19 are pending in this application and have been examined under the priority date of 01/30/2023 in accordance with the applicant’s claim for foreign priority. Claims 1 and 14-19 have been amended, and claims 2-13 have been canceled.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 08/25/2023 and 08/20/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Response to Arguments
35 U.S.C. 112(f)
Applicant’s arguments (see Remarks filed 2/26/2025) have been fully considered by the examiner and in view of the amendments to the claims, the interpretations under 35 U.S.C. 112(f) have been withdrawn.
35 U.S.C. 102
Applicant’s arguments (see Remarks filed 2/26/2025) have been fully considered by the examiner but are not persuasive. The applicant argues that Doria fails to teach any one of three axes matching the reference axes of the road. The examiner respectfully disagrees. The limitation of amended claims 1, 18, and 19 recites “the reference coordinate system has three coordinate axes, one of which is parallel to a lane marking on the road or is parallel to a moving direction of the camera calculated on a basis of a movement of the camera”, indicating that an axis of the reference coordinate system must be parallel to either the road markings or the moving direction of the camera, but not both. Doria teaches in [0087] that the system derives a frustum, which is analogous to a 3D reference coordinate system, to locate a target object with which the frustum overlaps, and further teaches in [0050] that the sign (target) object lies between two parallel planes of intersection defined by the focal distance and other distances determined by the image sensor. Additionally, Doria Figures 3 and 4A-4B show the determination of these planes and focal points based on the image: in Figure 3, a road marking is depicted as well as a sign (42), and in Figure 4B the planes are shown (D1 and D2). Since these two planes are focal-distance planes determined by the vehicle’s camera, it would be understood by one of ordinary skill in the art that they would be parallel to a lane line when the focal distance is measured from a camera on the vehicle’s front while the vehicle is traveling forward in the lane, as shown in Figure 10. Therefore, the examiner maintains that Doria teaches this limitation and that one of ordinary skill in the art prior to the effective filing date of the presently claimed invention would understand this. For at least the above reasons, the Examiner respectfully maintains the rejections under 35 U.S.C. 102(a)(1) in view of Doria.
(Doria, Figure 4B)
(Doria, Figure 10)
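For illustration of the disputed limitation, one conventional way to obtain a reference coordinate system with one axis parallel to the moving direction of the camera is to normalize the displacement between two successive camera positions and complete an orthonormal frame. The sketch below is illustrative only; the NumPy helper and its fixed world-up convention are the editor’s assumptions and do not appear in Doria or the application.

```python
import numpy as np

def reference_frame_from_motion(prev_pos, curr_pos, world_up=(0.0, 1.0, 0.0)):
    """Build three orthonormal coordinate axes where one axis is parallel
    to the camera's moving direction, estimated from two successive
    camera positions. Returns a 3x3 matrix with the axes as rows."""
    forward = np.asarray(curr_pos, float) - np.asarray(prev_pos, float)
    forward /= np.linalg.norm(forward)    # axis along the motion direction
    right = np.cross(forward, world_up)   # lateral axis, perpendicular to both
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)         # recomputed up completes the frame
    return np.vstack([right, up, forward])
```

Under this sketch, a camera translating straight down the lane yields a frame whose third axis coincides with the lane direction, which is the geometric relationship the examiner attributes to Doria’s focal-distance planes.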
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1 and 14-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Doria (US 20200193195 A1).
Regarding claim 1, Doria discloses: An information processing apparatus comprising one or more hardware processors configured to perform (Doria, [0008] non-transitory computer readable media are used to execute code so that processors can perform tasks):
acquiring image data (Doria, [0031] image sensors (image acquisition unit) may acquire images of the surroundings);
acquiring three-dimensional point cloud data including three-dimensional points (Doria, [0036] the system may include a point cloud analyzer, [0085] the system receives point cloud data associated with the region around the vehicle), each of the three-dimensional points representing a three-dimensional position of an object included in the image data (Doria, [0026] the system uses the point cloud to find positions of the objects in 3D space);
specifying a reference coordinate system representing a reference of a three-dimensional position (Doria, [0087] a processor (functionally equivalent to a coordinate specifying unit per [0160] of applicant’s specification) derives a frustum from the image, and a degree of overlap between the sign (the object) and the frustum is generated, which indicates the frustum is being used as a 3D reference);
detecting, from the image data, a two-dimensional target area in which a designated target object is included (Doria, [0045] the image analyzer (area detection unit) may use edge detection to determine the sign location in the image; because edge detection uses pixel features to determine the pixels in the image belonging to the sign, this would be analogous to determining the area of the image in which the sign (target object) is located; further, [0047] states that the system may also determine a subregion in the image which contains a sign (target object));
extracting, from the three-dimensional point cloud data, extraction point cloud data representing a three-dimensional position of an object included in the target area (Doria, [0087] the processor (functionally equivalent to the extraction unit as described in [0160] of applicant’s specification) identifies association sets in the point cloud data to determine the sign point cloud coordinate candidates (3D position of the object); [0100] road segments are identified as being in specific geographic regions based on the image);
acquiring a target object model being information obtained by modeling a shape of a first portion (Doria, [0028] the image may be segmented to obtain the objects, and the objects’ shape, size, and texture can be extracted), the first portion being at least part of the designated target object (Doria, [0047] the image is divided into subregions, the subregions have a variety of shapes, and the subregions contain portions of the sign (target object); [0052] the point cloud data consists of classified neighborhoods that have been designated as signs and their positions, where the neighborhoods/regions may be planar shapes);
generating target object information representing a position and an orientation of the target object model (Doria, [0053] the sign detector (generation unit) generates a set of possible location coordinates for a sign, [0026] each sign detected has 3D coordinates and a degree of overlap with a sighting frustum; since the position coordinates are in 3D, the sign would be detected as having a position in 3D space, meaning it would also have an orientation or angle as well), each corresponding to a case where the target object model is arranged in a three-dimensional space to follow the designated target object (Doria, [0026] each sign detected (target object) has 3D coordinates and a degree of overlap with a sighting frustum);
outputting the target object information (Doria, [0026] the sign positions and associations are output to the user and then displayed (functionally equivalent to an output unit)), wherein
the image data is generated by a camera installed in a mobile entity traveling on a road (Doria, [0031] vehicle (124) may have an image sensor or camera to collect data and images of the surroundings),
the reference coordinate system has three coordinate axes (Doria, [0053] the signs are determined by the sign detector (generation unit) by matching the point cloud data to the sighting frustum in the 3D coordinate set), one of which is parallel to a lane marking on the road or is parallel to a moving direction of the camera calculated on a basis of a movement of the camera (Doria, [0087] the frustum sighting point (the region in which the target object is detected/referenced) is determined based on properties of the camera and its position; [0050] the sign (target) object lies between two parallel planes of intersection defined by the focal distance and other distances determined by the image sensor; Figures 3 and 4A-4B show the determination of these planes and focal points based on the image, where in Figure 3 road markings are depicted as well as a sign (42), and in Figure 4B the planes are shown (D1 and D2); since these two planes are focal-distance planes determined by the vehicle’s camera, it can be interpreted that they would be parallel to a lane line when the focal distance is measured from a camera on the vehicle’s front while the vehicle is traveling forward in the lane, as shown in Figure 10, camera (115)),
(Doria, Figure 4B)
(Doria, Figure 10)
the first portion of the designated target object is a plane and is parallel to one or two of the three coordinate axes in the reference coordinate system (Doria, [0052] the point cloud data includes multiple neighborhoods previously classified as being planar and having been designated as having a sign with the sign position noted, the road sign plane would be parallel to the vertical axis (Y axis) of the direction the vehicle travels in, where the vehicle’s direction is defined by a set of coordinates in 3D (xyz)),
the target object model is an equation of the plane (Doria, [0038] each neighborhood of points in the point cloud defined as having a sign (target object) is planar, and the planar neighborhood region can be defined by equations 1-3 in paragraph [0038] of Doria),
and the one or more hardware processors are further configured to perform (Doria, [0008] non-transitory computer readable media are used to execute code so that processors can perform tasks):
in generating of the target object information, constraining the plane represented by the equation in parallel with the coordinate axis being parallel to the first portion in the reference coordinate system (Doria, [0050] the location of the road sign lies between two parallel planes, [0038] where the location of the road sign is in a planar neighborhood region of the image, therefore the plane in which the sign lies (which is represented by the equations 1-3) would be constrained by the two parallel planes in [0050] of Doria, Figure 4B illustrates this),
generating an estimated plane minimizing a distance to each of the three-dimensional points included in the extraction point cloud data (Doria, [0065]-[0066] the sign detection control calculates a focal overlap, which is the minimum value between the coverage and parsimony values; it may be the output of a minimizing function which takes the smaller of the two values; [0062] where the coverage value is the portion of intersection between the sighting frustum and the points in the neighborhood (the region containing the target object, which is a set of 3D points), and [0064] the parsimony value indicates the fraction of points intersecting the two planes (the frustum and the neighborhood); both measures are different indications of overlap of the estimated plane (sighting frustum) and the extracted 3D points (planar neighborhood of points containing the target object), therefore minimizing these two values would minimize the overlap or distance between the two sets of planar data),
and generating the target object information representing a position and an orientation of the estimated plane that is generated (Doria, [0057] the overlap detection is used to verify the position of the sign associated with the position candidate (estimated plane) using the methods described in [0060]-[0065] as cited above; [0048]-[0050] the algorithm (analogous to the detection unit) uses a frustum (a plane for detecting the object), which is a plane in 3D space giving it a position and an orientation; further, the frustum may be estimated as a plane extending in a given direction (orientation)).
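The claimed step of generating an estimated plane that minimizes the distance to the extracted points, while the plane is constrained parallel to a reference axis, corresponds to a standard constrained least-squares fit. The sketch below is illustrative only: it assumes a NumPy environment and arbitrarily takes the y-axis as the constraint axis, and it is not asserted to be the applicant’s or Doria’s implementation. With the normal’s y-component forced to zero, the 3D fit reduces to a total-least-squares line fit in the x-z projection.

```python
import numpy as np

def fit_axis_constrained_plane(points):
    """Fit a plane to 3D points (N x 3) constrained to stay parallel to the
    y-axis (normal has zero y-component), minimizing the squared orthogonal
    distance from each point to the plane. Returns (normal, d) with n.p = d."""
    pts = np.asarray(points, dtype=float)
    xz = pts[:, [0, 2]]                   # the constraint reduces the fit to 2D
    centroid = xz.mean(axis=0)
    cov = np.cov((xz - centroid).T, bias=True)
    # the normal of the best-fit line in x-z is the eigenvector of the
    # smallest eigenvalue of the 2x2 covariance (total least squares)
    _, eigvecs = np.linalg.eigh(cov)
    a, c = eigvecs[:, 0]
    normal = np.array([a, 0.0, c])        # zero y-component: plane parallel to y-axis
    d = np.array([a, c]) @ centroid       # plane passes through the centroid
    return normal, d
```

The returned pair (normal, d) is the plane equation n·p = d referenced in the claim language, with the parallelism constraint enforced by construction rather than by penalty.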
Regarding claim 14, Doria discloses: The information processing apparatus, wherein the one or more hardware processors are further configured to perform:
detecting an orientation of the first portion in the designated target object (Doria, [0048]-[0050] the algorithm (analogous to the detection unit) uses a frustum (a plane for detecting the object), which is a plane in 3D space giving it a position and an orientation; further, the frustum may be estimated as a plane extending in a given direction (orientation)),
in generating of the target object information, generating the target object information by fitting the target object model to the extraction point cloud data under a condition that the orientation of the target object model matches a detected orientation of the first portion in the designated target object (Doria, [0053] the sign detector (generation unit) generates a set of possible location coordinates for a sign, [0026] each sign detected has 3D coordinates and a degree of overlap with a sighting frustum; since the position coordinates are in 3D, the sign would have a position in 3D space, meaning it would also have an orientation or angle as well; [0053] the signs are determined by the sign detector (generation unit) by matching the point cloud data to the sighting frustum in the 3D coordinate set; because the sign regions are in a 3D point cloud set, they must have both a position and an orientation, as shown in Figures 5-8).
Regarding claim 15, Doria discloses: The information processing apparatus, wherein the one or more processors are further configured to perform: calculating a size of the designated target object in the target object model on the basis of the target area and the target object information, and in the outputting, outputting (Doria, [0028] the image segmentation controller (size calculation unit) segments the images from the vehicle and is able to extract characteristics of the objects (in this case a road sign) based on the area of the image the object is in; the characteristics can include the size, shape, and color of the object).
Regarding claim 16, Doria discloses: The information processing apparatus, wherein the one or more processors are configured to perform: in acquiring of the three-dimensional point cloud data, generating the three-dimensional point cloud data on the basis of the image data (Doria, [0031] image sensors may acquire images of the surroundings, [0032] point cloud data may be generated from this).
Regarding claim 17, Doria discloses: The information processing apparatus, wherein the one or more hardware processors are configured to perform: in acquiring the three-dimensional point cloud data, acquiring, from a three-dimensional sensor device, the three-dimensional point cloud data in which a correspondence relationship with a pixel position in the image data is defined (Doria, [0036] the system may include a point cloud analyzer, [0085] the system receives point cloud data associated with the region around the vehicle, [0026] the system uses the point cloud to find positions of the objects in 3D space, [0046] sign locations (target object location) may be matched to a pixel in the image data).
Regarding claim 18, Doria discloses:
A computer program product having a non-transitory computer-readable medium including instructions stored thereon, wherein the instructions, when executed by a computer, cause the computer to perform (Doria, [0076] the system includes a processor and a memory to collect and process data):
one or more hardware processors configured to perform (Doria, [0076] the system includes a processor and a memory to collect and process data):
acquiring image data (Doria, [0031] image sensors (image acquisition unit) may acquire images of the surroundings);
acquiring three-dimensional point cloud data including three-dimensional points (Doria, [0036] the system may include a point cloud analyzer, [0085] the system receives point cloud data associated with the region around the vehicle), each of the three-dimensional points representing a three-dimensional position of an object included in the image data (Doria, [0026] the system uses the point cloud to find positions of the objects in 3D space);
specifying a reference coordinate system representing a reference of a three-dimensional position (Doria, [0087] a processor (functionally equivalent to a coordinate specifying unit per [0160] of applicant’s specification) derives a frustum from the image, and a degree of overlap between the sign (the object) and the frustum is generated, which indicates the frustum is being used as a 3D reference);
detecting, from the image data, a two-dimensional target area in which a designated target object is included (Doria, [0045] the image analyzer (area detection unit) may use edge detection to determine the sign location in the image; because edge detection uses pixel features to determine the pixels in the image belonging to the sign, this would be analogous to determining the area of the image in which the sign (target object) is located; further, [0047] states that the system may also determine a subregion in the image which contains a sign (target object));
extracting, from the three-dimensional point cloud data, extraction point cloud data representing a three-dimensional position of an object included in the target area (Doria, [0087] the processor (functionally equivalent to the extraction unit as described in [0160] of applicant’s specification) identifies association sets in the point cloud data to determine the sign point cloud coordinate candidates (3D position of the object); [0100] road segments are identified as being in specific geographic regions based on the image);
acquiring a target object model being information obtained by modeling a shape of a first portion (Doria, [0028] the image may be segmented to obtain the objects, and the objects’ shape, size, and texture can be extracted), the first portion being at least part of the designated target object (Doria, [0047] the image is divided into subregions, the subregions have a variety of shapes, and the subregions contain portions of the sign (target object); [0052] the point cloud data consists of classified neighborhoods that have been designated as signs and their positions, where the neighborhoods/regions may be planar shapes);
generating target object information representing a position and an orientation of the target object model (Doria, [0053] the sign detector (generation unit) generates a set of possible location coordinates for a sign, [0026] each sign detected has 3D coordinates and a degree of overlap with a sighting frustum; since the position coordinates are in 3D, the sign would be detected as having a position in 3D space, meaning it would also have an orientation or angle as well), each corresponding to a case where the target object model is arranged in a three-dimensional space to follow the designated target object (Doria, [0026] each sign detected (target object) has 3D coordinates and a degree of overlap with a sighting frustum);
outputting the target object information (Doria, [0026] the sign positions and associations are output to the user and then displayed (functionally equivalent to an output unit)), wherein
the image data is generated by a camera installed in a mobile entity traveling on a road (Doria, [0031] vehicle (124) may have an image sensor or camera to collect data and images of the surroundings),
the reference coordinate system has three coordinate axes (Doria, [0053] the signs are determined by the sign detector (generation unit) by matching the point cloud data to the sighting frustum in the 3D coordinate set), one of which is parallel to a lane marking on the road or is parallel to a moving direction of the camera calculated on a basis of a movement of the camera (Doria, [0087] the frustum sighting point (the region in which the target object is detected/referenced) is determined based on properties of the camera and its position; [0050] the sign (target) object lies between two parallel planes of intersection defined by the focal distance and other distances determined by the image sensor; Figures 3 and 4A-4B show the determination of these planes and focal points based on the image, where in Figure 3 road markings are depicted as well as a sign (42), and in Figure 4B the planes are shown (D1 and D2); since these two planes are focal-distance planes determined by the vehicle’s camera, it can be interpreted that they would be parallel to a lane line when the focal distance is measured from a camera on the vehicle’s front while the vehicle is traveling forward in the lane, as shown in Figure 10, camera (115)),
(Doria, Figure 4B)
(Doria, Figure 10)
the first portion of the designated target object is a plane and is parallel to one or two of the three coordinate axes in the reference coordinate system (Doria, [0052] the point cloud data includes multiple neighborhoods previously classified as being planar and having been designated as having a sign with the sign position noted, the road sign plane would be parallel to the vertical axis (Y axis) of the direction the vehicle travels in, where the vehicle’s direction is defined by a set of coordinates in 3D (xyz)),
the target object model is an equation of the plane (Doria, [0038] each neighborhood of points in the point cloud defined as having a sign (target object) is planar, and the planar neighborhood region can be defined by equations 1-3 in paragraph [0038] of Doria),
and the one or more hardware processors are further configured to perform (Doria, [0008] non-transitory computer readable media are used to execute code so that processors can perform tasks):
in generating of the target object information, constraining the plane represented by the equation in parallel with the coordinate axis being parallel to the first portion in the reference coordinate system (Doria, [0050] the location of the road sign lies between two parallel planes, [0038] where the location of the road sign is in a planar neighborhood region of the image, therefore the plane in which the sign lies (which is represented by the equations 1-3) would be constrained by the two parallel planes in [0050] of Doria, Figure 4B illustrates this),
generating an estimated plane minimizing a distance to each of the three-dimensional points included in the extraction point cloud data (Doria, [0065]-[0066] the sign detection control calculates a focal overlap, which is the minimum value between the coverage and parsimony values; it may be the output of a minimizing function which takes the smaller of the two values; [0062] where the coverage value is the portion of intersection between the sighting frustum and the points in the neighborhood (the region containing the target object, which is a set of 3D points), and [0064] the parsimony value indicates the fraction of points intersecting the two planes (the frustum and the neighborhood); both measures are different indications of overlap of the estimated plane (sighting frustum) and the extracted 3D points (planar neighborhood of points containing the target object), therefore minimizing these two values would minimize the overlap or distance between the two sets of planar data),
and generating the target object information representing a position and an orientation of the estimated plane that is generated (Doria, [0057] the overlap detection is used to verify the position of the sign associated with the position candidate (estimated plane) using the methods described in [0060]-[0065] as cited above; [0048]-[0050] the algorithm (analogous to the detection unit) uses a frustum (a plane for detecting the object), which is a plane in 3D space giving it a position and an orientation; further, the frustum may be estimated as a plane extending in a given direction (orientation)).
Regarding claim 19, Doria discloses: An information processing method implemented by a computer, the method comprising:
acquiring image data (Doria, [0031] image sensors may acquire images of the surroundings);
acquiring three-dimensional point cloud data including three-dimensional points (Doria, [0085] the system receives point cloud data associated with the region around the vehicle), each of the three-dimensional points representing a three-dimensional position of an object included in the image data (Doria, [0026] the system uses the point cloud to find positions of the objects in 3D space);
specifying a reference coordinate system representing a reference of a three-dimensional position (Doria, [0087] a processor derives a frustum from the image, and a degree of overlap between the sign (the object) and the frustum is generated, which indicates the frustum is being used as a 3D reference);
detecting, from the image data, a two-dimensional target area in which a designated target object is included (Doria, [0045] the image analyzer may use edge detection to determine the sign location in the image; because edge detection uses pixel features to determine the pixels in the image belonging to the sign, this would be analogous to determining the area of the image in which the sign (target object) is located; further, [0047] states that the system may also determine a subregion in the image which contains a sign (target object));
extracting, from the three-dimensional point cloud data, extraction point cloud data representing a three-dimensional position of an object included in the target area (Doria, [0087] the processor identifies association sets in the point cloud data to determine the sign point cloud coordinate candidates (3D position of the object); [0100] road segments are identified as being in specific geographic regions based on the image);
acquiring a target object model being information obtained by modeling a shape of a first portion (Doria, [0028] the image may be segmented to obtain the objects, and the objects’ shape, size, and texture can be extracted), the first portion being at least part of the designated target object (Doria, [0047] the image is divided into subregions, the subregions have a variety of shapes, and the subregions contain portions of the sign (target object); [0052] the point cloud data consists of classified neighborhoods that have been designated as signs and their positions, where the neighborhoods/regions may be planar shapes);
generating target object information representing a position and an orientation of the target object model (Doria, [0053] the sign detector generates a set of possible location coordinates for a sign, [0026] each sign detected has 3D coordinates and a degree of overlap with a sighting frustum; since the position coordinates are in 3D, the sign would be detected as having a position in 3D space, meaning it would also have an orientation or angle as well), each corresponding to a case where the target object model is arranged in a three-dimensional space to follow the designated target object (Doria, [0026] each sign detected (target object) has 3D coordinates and a degree of overlap with a sighting frustum);
outputting the target object information (Doria, [0026] the sign positions and associations are output to the user and then displayed), wherein
the image data is generated by a camera installed in a mobile entity traveling on a road (Doria, [0031] vehicle (124) may have an image sensor or camera to collect data and images of the surroundings),
the reference coordinate system has three coordinate axes (Doria, [0053] the signs are determined by the sign detector (generation unit) by matching the point cloud data to the sighting frustum in the 3D coordinate set), one of which is parallel to a lane marking on the road or is parallel to a moving direction of the camera calculated on a basis of a movement of the camera (Doria, [0087] the frustum sighting point (the region in which the target object is detected/referenced) is determined based on properties of the camera and its position; [0050] the sign (target) object lies between two parallel planes of intersection defined by the focal distance and other distances determined by the image sensor; Figures 3 and 4A-4B show the determination of these planes and focal points based on the image, where in Figure 3 road markings are depicted as well as a sign (42), and in Figure 4B the planes are shown (D1 and D2); since these two planes are focal-distance planes determined by the vehicle’s camera, it can be interpreted that they would be parallel to a lane line when the focal distance is measured from a camera on the vehicle’s front while the vehicle is traveling forward in the lane, as shown in Figure 10, camera (115)),
(Doria, Figure 4B)
(Doria, Figure 10)
the first portion of the designated target object is a plane and is parallel to one or two of the three coordinate axes in the reference coordinate system (Doria, [0052] the point cloud data includes multiple neighborhoods previously classified as being planar and having been designated as having a sign with the sign position noted, the road sign plane would be parallel to the vertical axis (Y axis) of the direction the vehicle travels in, where the vehicle’s direction is defined by a set of coordinates in 3D (xyz)),
the target object model is an equation of the plane (Doria, [0038] each neighborhood of points in the point cloud defined as having a sign (target object) is planar, and the planar neighborhood region can be defined by equations 1-3 in paragraph [0038] of Doria),
and the one or more hardware processors are further configured to perform (Doria, [0008] non-transitory computer readable media are used to execute code so that processors can perform tasks):
in generating of the target object information, constraining the plane represented by the equation in parallel with the coordinate axis being parallel to the first portion in the reference coordinate system (Doria, [0050] the location of the road sign lies between two parallel planes, [0038] where the location of the road sign is in a planar neighborhood region of the image, therefore the plane in which the sign lies (which is represented by the equations 1-3) would be constrained by the two parallel planes in [0050] of Doria, Figure 4B illustrates this),
generating an estimated plane minimizing a distance to each of the three-dimensional points included in the extraction point cloud data (Doria, [0065]-[0066] the sign detection control calculates a focal overlap, which is the minimum value between the coverage and parsimony values; it may be the output of a minimizing function which takes the smaller of the two values; [0062] where the coverage value is the portion of intersection between the sighting frustum and the points in the neighborhood (the region containing the target object, which is a set of 3D points), and [0064] the parsimony value indicates the fraction of points intersecting the two planes (the frustum and the neighborhood); both measures are different indications of overlap of the estimated plane (sighting frustum) and the extracted 3D points (planar neighborhood of points containing the target object), therefore minimizing these two values would minimize the overlap or distance between the two sets of planar data),
and generating the target object information representing a position and an orientation of the estimated plane that is generated (Doria, [0057] the overlap detection is used to verify the position of the sign associated with the position candidate (estimated plane) using the methods described in [0060]-[0065] as cited above; [0048]-[0050] the algorithm (analogous to the detection unit) uses a frustum (a plane for detecting the object), which is a plane in 3D space giving it a position and an orientation; further, the frustum may be estimated as a plane extending in a given direction (orientation)).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For a listing of analogous art as cited by the Examiner please see the attached PTO-892 Notice of References Cited.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666