Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea, a mental process, without significantly more. Claims 1 and 11 recite "acquiring an original image including the companion animal; determining a first feature region corresponding to a face of the companion animal and a species of the companion animal by image processing on the original image; and detecting an object for identifying the companion animal by setting a second feature region within the first feature region based on the determined species of the companion animal, wherein detecting the object for identifying the companion animal comprises: selecting a detector corresponding to the determined species of the companion animal, among a plurality of detectors that are specialized for respective animal species and independent each other;
and setting the second feature region within the first feature region by using the selected detector,
and wherein the detector is an algorithm for detecting objects having various sizes with respect to one image in an artificial neural network." This judicial exception is not integrated into a practical application because the claimed steps are a mental process carried out on a generic computer (see MPEP 2106.04: a process that "can be performed in the human mind, or by a human using pen and paper" is an abstract idea, and courts do not "distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer"). The claimed process steps are drawn to generic processes capable of being carried out in the human mind, and they do not include additional elements sufficient to amount to significantly more than the judicial exception, because the presently claimed method steps are nothing more than pre- and post-solution activity of data gathering and data output (see MPEP 2106.05(g), Insignificant Extra-Solution Activity). Additional reasoning for each of the respective dependent claims is provided below.
Claims 2 and 12 recite feature regions and feature values but do not recite anything more than a mental process done on a generic computer. Feature region and feature value determination are insignificant pre and post-processing extra solution activities.
Claims 3 and 13 recite features of feature image generation and probability values but do not recite anything more than a mental process done on a generic computer. Probability values are mathematical expressions and are insignificant extra solution activities.
Claims 4 and 14 recite features of comparing feature values with reference values but do not recite anything more than a mental process done on a generic computer. Comparing values is a mathematical operation and an insignificant extra solution activity.
Claims 5 and 15 recite features of downsampling images but do not recite anything more than a mental process done on a generic computer. Downsampling images is an insignificant extra solution activity.
Claims 6 and 16 recite feature regions and feature values but do not recite anything more than a mental process done on a generic computer. Feature region and feature value determination are insignificant pre and post-processing extra solution activities.
Claims 7 and 17 recite features of upsampling images but do not recite anything more than a mental process done on a generic computer. Upsampling images is an insignificant extra solution activity.
Claims 8 and 18 recite features of setting probability values but do not recite anything more than a mental process done on a generic computer. Setting probability values is a mathematical operation and an insignificant extra solution activity.
Claims 9 and 19 recite features of comparing feature values with reference values but do not recite anything more than a mental process done on a generic computer. Comparing values is a mathematical operation and an insignificant extra solution activity.
Claims 10 and 20 recite features of feature image generation and probability values but do not recite anything more than a mental process done on a generic computer. Probability values are mathematical expressions and are insignificant extra solution activities.
Claims 1-20 are therefore rejected under 35 U.S.C. 101 as being directed to an abstract idea.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: communication module in claim 11.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6-8, 10-13, 16-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20220036054 A1) in view of Chen (US 20210068371 A1).
Regarding claims 1 and 11, Kim discloses A method for detecting an object for identifying a companion animal, the method comprising ([0022] In an embodiment, the identification unit may be configured to identify the target companion animal by applying the entire facial image and the at least one sub-patch to a fourth model.):
acquiring an original image including the companion animal ([0055] The companion animal identification system 1 displays the screen including the preview image corresponding to the input image of the image acquisition unit 100 on the entire screen of the display unit 200 or a part of the screen.);
determining a first feature region and a species of the companion animal by image processing on the original image ("[0015] In an embodiment, the feature point may include at least one of a first feature point corresponding to a left eye, a second feature point corresponding to a right eye, or a third feature point corresponding to a nose.
[0062] In an embodiment, the view correction unit 300 may include an animal species recognition model (or referred to as a first model)."); and
detecting an object for identifying the companion animal within the first feature region based on the determined species of the companion animal ([0062] In an embodiment, the view correction unit 300 may include an animal species recognition model (or referred to as a first model). The animal species recognition model is a pre-trained machine learning model to extract the features from the input image and recognize the animal species of the companion animal in the input image based on the extracted features. The features extracted by the first model are features for recognizing the animal species of the companion animal and may be referred to as first features.),
a communication module that transmits an image of the object to a server when the object for identifying the companion animal is valid ([0051] The image acquisition unit 100 may include a camera module of a smartphone, but is not limited thereto, and may include a variety of devices capable of capturing an object and generating and transmitting image data, for example, digital cameras.
Fig. 2, segmentation (patch generation of nose, eyes, and a combination of the two).
[0147] In particular embodiments, the companion animal identification system 1 may be implemented as a server; and a client device disposed at a remote location. The client device includes the image acquisition unit 100. The server includes the identification unit 500. The view correction unit 300 may be included in the server or the client device.).
and setting the second feature region within the first feature region by using the selected detector ([0092] Referring to FIG. 4, the first to third feature points are extracted from the face image of the dog. Subsequently, a connecting line between the first feature point A and the second feature point B is formed, and when a projected location of the third feature point C onto the connecting line is included at the center of the connecting line, it is determined that the location of the nose is disposed at the center between the left eye and the right eye in the dog's face.),
and wherein the detector is an algorithm for detecting objects having various sizes with respect to one image in an artificial neural network ([0079] The face analysis model is trained using the third training dataset including a plurality of third training samples. Each of the plurality of third training samples includes a face image of an object, face region information of the object and feature point information. The face region information may include the location, region size and boundary of the face region in the image. The feature point information may include the location of the feature point, the size of the feature point and boundary in the image.).
Kim does not explicitly disclose wherein detecting the object for identifying the companion animal comprises: selecting a detector corresponding to the determined species of the companion animal, among a plurality of detectors that are specialized for respective animal species and independent each other.
In a similar field of endeavor of animal nose print identification, Chen teaches wherein detecting the object for identifying the companion animal comprises: selecting a detector corresponding to the determined species of the companion animal, among a plurality of detectors that are specialized for respective animal species and independent each other ("[0037] Then, the compared nose prints data, the compared body information, the compared face data, and the compared breed information are compared with the actual nose prints data, the actual body information, the actual face data, and the actual breed information respectively to obtain a comparison result. Thereafter, the identification unit 3 judges whether the image data of the animals matches with the animal identity data of the animals. The compared breed information is analyzed by using convolutional neural network (CNN) of Google Inception V3.
[0044] (C). selecting one of multiple animal identity data from a database, wherein the one animal identity data is actual nose prints data, actual body information, and actual face data of the animals, wherein the actual nose prints data, the actual body information, and the actual face data of the animals are compared with the compared nose prints data, the compared body information, and the compared face information respectively so as to distinguish whether the compared nose prints data, the compared body information, and the compared face data of the animals match with the animal identity data of the animals;
[0045] (D). outputting distinguishing results by using an output unit.").
It would have been obvious to one of ordinary skill in the art to combine Kim's disclosure of animal feature recognition with Chen's teaching of nose print detection in order to implement a method and a system for distinguishing identities based on nose prints of animals that are capable of greatly enhancing identification accuracy ([0004] of Chen).
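For clarity of the record only, the following Python sketch illustrates one way the selection among independent, species-specialized detectors addressed above could be realized; the detector classes, species labels, and region format are assumptions of the examiner and are not drawn from Kim or Chen.

# Illustrative sketch only (not from Kim or Chen): independent detectors,
# each specialized for one species, selected by the recognized species.
from dataclasses import dataclass
from typing import Dict, Tuple

Region = Tuple[int, int, int, int]  # (x, y, width, height); assumed format

@dataclass
class DogNoseDetector:
    """Hypothetical detector specialized for dog faces."""
    def detect(self, face: Region) -> Region:
        x, y, w, h = face
        # Assumed prior: a dog's nose sits near the lower center of the face.
        return (x + w // 3, y + h // 2, w // 3, h // 4)

@dataclass
class CatNoseDetector:
    """Hypothetical detector specialized for cat faces."""
    def detect(self, face: Region) -> Region:
        x, y, w, h = face
        # Assumed prior: a cat's nose sits slightly higher within the face.
        return (x + w // 3, y + 2 * h // 5, w // 3, h // 5)

# A plurality of detectors specialized for respective animal species and
# independent of each other, keyed by species.
DETECTORS: Dict[str, object] = {"dog": DogNoseDetector(), "cat": CatNoseDetector()}

def set_second_feature_region(species: str, first_region: Region) -> Region:
    """Select the detector matching the recognized species and apply it within
    the first feature region (the face) to set the second feature region."""
    detector = DETECTORS[species]
    return detector.detect(first_region)

if __name__ == "__main__":
    print(set_second_feature_region("dog", (100, 80, 120, 120)))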
Regarding claims 2 and 12, Kim discloses applying first pre-processing to the original image ([0110] With the guide of FIG. 7, it is possible to acquire the preview image with an adjusted view to satisfy the first alignment condition.);
determining the species of the companion animal from the pre-processed image and setting the first feature region ([0155] In an embodiment, the step S300 includes recognizing a species and/or physical features of the target companion animal (S305). The step S305 may be performed based on any of a plurality of images when the plurality of images is acquired for the same object.); and
extracting a first feature value by first post-processing on the first feature region ("[0140] In an example, the matching scores for the class calculated by the classifier of the machine learning model is set to have a higher value with the increasing probability that the target companion animal in the input patch will belong to the corresponding class. The matching scores may be a probability value itself, but may be calculated by normalizing the probability value to a specific range of values (for example, 1 to 100).
[0141] In an embodiment, the determination part of the companion animal identification model combines the matching scores of the target companion animal for each patch, and calculates the final score of the target companion animal based on the combination of the matching scores of the target companion animal for the classes. The final scores may be combined for each same class. Accordingly, in the case of a plurality of classes, a set of final scores of the target companion animal may be acquired.").
Regarding claims 3 and 13, Kim discloses generating a plurality of feature images from the pre-processed image by using a learning neural network ("claim 4: The system according to claim 2, wherein the view correction unit includes a second model to which the preview image of the target companion animal is applied, the second model is a pre-trained machine learning model to extract second features from an input image and recognize physical features of the companion animal in the input image based on the extracted features, and is trained using a second training dataset including a plurality of second training samples, and each of the plurality of second training samples includes a face image of an animal and the physical features of the corresponding animal."):
applying a bounding box defined in advance to each of the plurality of feature images ([0079] The face analysis model is trained using the third training dataset including a plurality of third training samples. Each of the plurality of third training samples includes a face image of an object, face region information of the object and feature point information. The face region information may include the location, region size and boundary of the face region in the image. The feature point information may include the location of the feature point, the size of the feature point and boundary in the image.);
calculating a probability value for each type of companion animal within the bounding box ([0140] In an example, the matching scores for the class calculated by the classifier of the machine learning model is set to have a higher value with the increasing probability that the target companion animal in the input patch will belong to the corresponding class. The matching scores may be a probability value itself, but may be calculated by normalizing the probability value to a specific range of values (for example, 1 to 100).); and
forming the first feature region to include the bounding box when the calculated probability value for a specific animal species is equal to or greater than a reference value ("[0139] The matching scores indicates the matching extent between the target companion animal and the identifier of the class. The matching scores may be calculated through a variety of geometric similarities (for example, cosine similarity, Euclidean similarity) or a classifier of a machine learning model (for example, a CNN classifier).
[0140] In an example, the matching scores for the class calculated by the classifier of the machine learning model is set to have a higher value with the increasing probability that the target companion animal in the input patch will belong to the corresponding class. The matching scores may be a probability value itself, but may be calculated by normalizing the probability value to a specific range of values (for example, 1 to 100).
[0141] In an embodiment, the determination part of the companion animal identification model combines the matching scores of the target companion animal for each patch, and calculates the final score of the target companion animal based on the combination of the matching scores of the target companion animal for the classes. The final scores may be combined for each same class. Accordingly, in the case of a plurality of classes, a set of final scores of the target companion animal may be acquired.
[0142] When the matching results for each patch are calculated as scores by the matching part, the determination part is configured to calculate the final score of the target companion animal for a specific class based on the matching scores calculated for each patch.").
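As a purely illustrative sketch of the bounding-box thresholding addressed above for claims 3 and 13 (the box format, per-species probability values, and reference value are assumptions of the examiner and are not taken from Kim):

# Illustrative only: candidate bounding boxes carry per-species probability
# values; boxes meeting the reference value for the recognized species are
# kept, and the first feature region is formed to include them.
from typing import Dict, List, Optional, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height); assumed format

def form_first_feature_region(
    candidates: List[Tuple[Box, Dict[str, float]]],
    species: str,
    reference_value: float = 0.5,  # assumed reference value
) -> Optional[Box]:
    """Return the smallest box enclosing every candidate whose probability for
    the given species is equal to or greater than the reference value."""
    kept = [box for box, probs in candidates
            if probs.get(species, 0.0) >= reference_value]
    if not kept:
        return None
    x0 = min(b[0] for b in kept)
    y0 = min(b[1] for b in kept)
    x1 = max(b[0] + b[2] for b in kept)
    y1 = max(b[1] + b[3] for b in kept)
    return (x0, y0, x1 - x0, y1 - y0)

if __name__ == "__main__":
    candidates = [((10, 10, 50, 50), {"dog": 0.92, "cat": 0.05}),
                  ((15, 12, 60, 55), {"dog": 0.40, "cat": 0.10})]
    print(form_first_feature_region(candidates, "dog"))  # only the first box meets 0.5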
Regarding claims 6 and 16, Kim discloses applying second pre-processing to the first feature region for identifying the species of the companion animal ([0020] In an embodiment, the view correction unit may be configured to calculate a connection vector including first and second feature points corresponding to both eyes, calculate a rotation matrix (T) of the connection vector, and align the connection vector into a non-rotated state based on the calculated rotation matrix (T) of the connection vector.);
setting a second feature region for identifying the companion animal based on the species of the companion animal in the first feature region subjected to the second pre-processing ([0089] The view correction unit 300 may determine if the location of the nose is disposed at the center between the left eye and the right eye using a connecting line between the first and second feature points corresponding to the left/right eyes and the third feature point corresponding to the nose.); and
extracting a second feature value by applying second post-processing to the second feature region ("Fig. 2: the dog breed is analyzed from the face region, the face is aligned, and patches for the eyes and nose are then found; locating the eyes and nose within the cropped face image applies object detection at a higher resolution.
[0137] In another embodiment, when the matching results for each patch are calculated as scores by the matching part, the determination part is configured to calculate a final score of the target companion animal based on the matching scores calculated for each patch. The matching scores may be calculated based on data acquired by the feature extraction part and/or the matching part.").
Regarding claims 7 and 17, Kim discloses the second pre-processing on the first feature region is performed at a second resolution higher than a first resolution at which the first pre-processing for setting the first feature region is applied (Fig. 2: the dog breed is analyzed from the face region, the face is aligned, and the eyes and nose are then found; locating the eyes and nose within the face image applies object detection at a higher resolution.).
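To illustrate the coarse-to-fine resolution handling discussed above for claims 7 and 17 (the concrete resolutions, grayscale image representation, and nearest-neighbor scaling are assumptions of the examiner, not features taken from Kim's Fig. 2):

# Illustrative only: the whole image is pre-processed at a low first
# resolution, and the cropped first feature region is re-processed at a higher
# second resolution so that smaller objects (eyes, nose) can be detected in it.
from typing import List, Tuple

Image = List[List[int]]  # grayscale pixels as nested lists; assumed representation

def resize_nearest(img: Image, out_h: int, out_w: int) -> Image:
    """Nearest-neighbor resize, standing in for real resampling."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
            for r in range(out_h)]

def crop(img: Image, region: Tuple[int, int, int, int]) -> Image:
    x, y, w, h = region
    return [row[x:x + w] for row in img[y:y + h]]

def two_stage_preprocess(original: Image, first_region: Tuple[int, int, int, int]):
    coarse = resize_nearest(original, 64, 64)                      # first resolution (assumed)
    fine = resize_nearest(crop(original, first_region), 128, 128)  # higher second resolution (assumed)
    return coarse, fine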
Regarding claims 8 and 18, Kim discloses wherein setting the second feature region comprises setting the second feature region based on a probability that the object for identifying the companion animal is located in the first feature region in accordance with the species of the companion animal ("[0079] Each of the plurality of third training samples includes a face image of an object, face region information of the object and feature point information. The face region information may include the location, region size and boundary of the face region in the image. The feature point information may include the location of the feature point, the size of the feature point and boundary in the image.
[0140] In an example, the matching scores for the class calculated by the classifier of the machine learning model is set to have a higher value with the increasing probability that the target companion animal in the input patch will belong to the corresponding class. The matching scores may be a probability value itself, but may be calculated by normalizing the probability value to a specific range of values (for example, 1 to 100).").
Regarding claims 10 and 20, Kim discloses generating feature region candidates for determining the species of the companion animal from the image ([0163] The companion animal identification method includes the step (S500) of generating at least one sub-patch from the entire facial image of the face image of the target companion animal having the aligned face view; and extracting features from the entire facial image and the at least one sub-patch and classifying the extracted features for each patch using an identifier which matches the target companion animal for each patch.), and
generating the first feature region having a location and a size that are determined based on a reliability value of each of the feature region candidates ("[0079] The face region information may include the location, region size and boundary of the face region in the image. The feature point information may include the location of the feature point, the size of the feature point and boundary in the image.
[0138] The matching part of the companion animal identification model may calculate the matching scores of the target companion animal for the class of each identifier based on the features extracted from the input patch. For example, for a first class, the matching part of the companion animal identification model may calculate a first matching scores of the target companion animal for the entire facial image; a second matching scores of the target companion animal for the first sub-patch, a third matching scores of the target companion animal for the second sub-patch and/or a fourth matching scores of the target companion animal for the third sub-patch.").
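As an illustrative sketch of determining the first feature region from candidate regions as recited in claims 10 and 20 (the reliability-weighted averaging shown here is one possible reading assumed by the examiner, not a disclosure of Kim):

# Illustrative only: the location and size of the first feature region are
# determined from the reliability value of each feature region candidate,
# here by a reliability-weighted average of the candidate boxes.
from typing import List, Tuple

Candidate = Tuple[Tuple[int, int, int, int], float]  # ((x, y, w, h), reliability)

def fuse_candidates(candidates: List[Candidate]) -> Tuple[int, int, int, int]:
    """Weight each candidate's location and size by its reliability value."""
    total = sum(reliability for _, reliability in candidates)
    x = sum(box[0] * r for box, r in candidates) / total
    y = sum(box[1] * r for box, r in candidates) / total
    w = sum(box[2] * r for box, r in candidates) / total
    h = sum(box[3] * r for box, r in candidates) / total
    return (round(x), round(y), round(w), round(h))

if __name__ == "__main__":
    print(fuse_candidates([((10, 10, 50, 50), 0.9), ((20, 16, 60, 58), 0.1)]))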
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20220036054 A1) in view of Chen (US 20210068371 A1), in view of Ko (WO 2020075888 A1), and further in view of SHMIGELSKY (WO 2022077113 A1).
Regarding claims 4 and 14, Kim does not explicitly disclose but in a similar field of endeavor of animal nose print recognition, Ko teaches the object for identifying the companion animal is detected when the first feature value is greater than a reference value (page 12: Meanwhile, referring to FIG. 3 again, if the position of the first region including the nose of the animal is recognized in the first image in step S230, the controller 130 may determine whether the preset condition is satisfied. Yes (S240).).
It would have been obvious to one of ordinary skill in the art to combine Kim and Chen's disclosure of animal nose print recognition with Ko's teaching of thresholding, in order to implement an animal registration system to make it easier to find the owners of the abandoned animals (page 1). Kim, Chen, and Ko do not explicitly disclose or teach that additional processing is omitted when the first feature value is smaller than the reference value.
In a similar field of endeavor of animal visual identification, SHMIGELSKY teaches additional processing is omitted when the first feature value is smaller than the reference value ([0247] The confidence score represents the accuracy that the animal identification is predicted correctly. The confidence score may be compared to a confidence score threshold to determine whether the predicted animal identification should be accepted or not.).
It would have been obvious to one of ordinary skill in the art to combine Kim, Chen, and Ko's disclosure of thresholding with SHMIGELSKY's teaching of omitting additional processing based on a confidence score, in order to simultaneously monitor one or more animals in a herd for tracking and managing various aspects of the tracked animals such as, but not limited to, their health, activity, and/or nutrition ([0166] of SHMIGELSKY).
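To illustrate the threshold-gated processing of claims 4 and 14 as combined above (the reference value and function names are hypothetical assumptions of the examiner, not taken from Ko or SHMIGELSKY):

# Illustrative only: additional processing is omitted when the first feature
# value is smaller than the reference value, and proceeds otherwise.
REFERENCE_VALUE = 0.5  # assumed reference value

def run_second_stage(first_region):
    """Placeholder for the species-specific second-stage detection."""
    return {"second_feature_region": first_region}

def maybe_detect_object(first_feature_value: float, first_region):
    if first_feature_value < REFERENCE_VALUE:
        return None                        # additional processing is omitted
    return run_second_stage(first_region)  # detection continues

if __name__ == "__main__":
    print(maybe_detect_object(0.2, (100, 80, 120, 120)))  # below threshold: None
    print(maybe_detect_object(0.8, (100, 80, 120, 120)))  # above threshold: proceeds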
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20220036054 A1) in view of Chen (US 20210068371 A1), and further in view of YAMAMOTO (US 20210306586 A1).
Regarding claims 5 and 15, Kim discloses applying the first pre-processing to the image converted to the first resolution ([0113] With the guide of FIG. 8, it is possible to acquire the preview image having an adjusted view to satisfy the second alignment condition.).
Kim does not explicitly disclose but in a similar field of digital images, YAMAMOTO teaches converting the original image into an image having a first resolution lower than an original resolution ([0194] On the other hand, the resolution of the image when line thinning is performed is lower compared to the case where line thinning is not performed.).
It would have been obvious to one of ordinary skill in the art to combine Kim and Chen's disclosure of animal nose print recognition with YAMAMOTO's teaching of image conversion, in order to provide an image that conveys sufficient information for visual recognition and thereby improve the recognition accuracy.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20220036054 A1) in view of Chen (US 20210068371 A1), and further in view of Ko (WO 2020075888 A1).
Regarding claims 9 and 19, Kim does not explicitly disclose but in a similar field of endeavor of animal nose print recognition, Ko teaches wherein when the second feature value is greater than a reference value (page 15: The control unit 130 may extract the feature points of the left and right nostrils again on the corrected first image. The controller 130 may recognize that the preset condition is satisfied when the feature points of the extracted left / right nostrils and the stored feature points are matched again. The above-described method of correcting and frontizing the first image is merely an example, and the present disclosure is not limited thereto.), an image including the second feature region is transmitted to a server (page 15: According to some embodiments of the present disclosure, the controller 130 may generate second image data including a region of interest within the first image. The second image may be transmitted to the server and used to obtain object information of the photographed animal.).
It would have been obvious to one of ordinary skill in the art to combine Kim and Chen’s disclosure of animal nose print recognition with Ko’s teaching of thresholding, in order to implement an animal registration system to make it easier to find the owners of the abandoned animals (page 1).
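As an illustrative sketch of the conditional transmission of claims 9 and 19 (the endpoint URL, field names, threshold, and use of the requests library are assumptions of the examiner, not details taken from Ko):

# Illustrative only: when the second feature value is greater than a reference
# value, an image including the second feature region is transmitted to a server.
import io
import requests  # assumed to be available in the runtime environment

REFERENCE_VALUE = 0.5                        # assumed threshold
SERVER_URL = "https://example.com/identify"  # hypothetical endpoint

def maybe_upload(second_feature_value: float, region_png: bytes) -> bool:
    """Upload the region image only when the feature value exceeds the threshold."""
    if second_feature_value <= REFERENCE_VALUE:
        return False  # nothing is transmitted
    files = {"image": ("second_region.png", io.BytesIO(region_png), "image/png")}
    response = requests.post(SERVER_URL, files=files, timeout=10)
    return response.ok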
Response to Arguments
Applicant's arguments filed 01/13/2026 have been fully considered but they are not persuasive. Regarding the argument that the 101 rejection is overcome by the new limitations, the examiner most respectfully disagrees. The new limitations still recite a generic process that can be performed by a computer. EXAMINER'S SUGGESTION: The examiner appreciates the amendments made as well as the remarks provided. The examiner suggests amending the independent claims to include a cascaded detection architecture, which may overcome the 101 rejection.
Applicant's arguments, see pages 5-10, filed 01/13/2026, with respect to the rejection(s) of claim(s) 1 and 11 under 35 U.S.C. 102(a)(2) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Chen (US 20210068371 A1).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20210089763 A1, pertinent to claim 1: [0051] It is contemplated that the locations, sizes, and shapes of the nostrils 12 and other elements of the image 10 may depend on the animal type and breed. The reference point identifier 130 may, for example, detect nostril locations, possibly with the help of animal type and breed information, and determine two circular or ellipsoid regions representing the nostrils 12.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AHMED A NASHER whose telephone number is (571)272-1885. The examiner can normally be reached Mon - Fri 0800 - 1700.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Moyer can be reached at (571) 272-9523. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AHMED A NASHER/Examiner, Art Unit 2675
/ANDREW M MOYER/Supervisory Patent Examiner, Art Unit 2675