DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Oath/Declaration
2. The receipt of Oath/Declaration is acknowledged.
Information Disclosure Statement
3. The information disclosure statements (IDS) submitted on 08/31/2023, 01/09/2024, 08/13/2024, and 10/24/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
4. The drawing(s) filed on 02/17/2023 are accepted by the Examiner.
Election/Restrictions
5. Applicant’s election without traverse of Species I (Claims 1-12) in the reply filed on 12/18/2025 is acknowledged.
6. Claims 13-32 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected species, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 12/18/2025.
Status of Claims
7. Claims 1-32 are pending in this application.
Claims 13-32 are withdrawn from consideration.
Claims 1-12 are currently presented for examination.
Claim Rejections - 35 USC § 103
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
10. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
11. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
12. Claims 1-6, 8-9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Begun et al. (US 2021/0044864) in view of Simhadri et al. (US 2021/0209734), hereinafter ‘Begun’ and ‘Simhadri’.
Regarding Claim 1:
Begun discloses a method for content recognition, the method comprising:
Begun describes a ‘method and apparatus for identifying video content based on biometric features of characters’; (Begun: ¶[0002]).
sampling a source content for performing content recognition;
Begun teaches receiving video content and extracting a frame from the received video content for subsequent analysis/recognition steps (e.g., sampling the source content); (Begun: Fig. 5 flowchart: ‘receive video content’ at S510; ‘extract a frame’ at S520; ¶[0061]).
detecting content elements from the sampled source content; and
Begun teaches detecting biometric features (e.g., facial/biometric features) from the extracted frame. These biometric feature-bearing regions correspond to content elements (e.g., persons/faces) detected from the sampled source content; (Begun: Fig. 5 flowchart: ‘detect biometric features from extracted frame’ at S530; ¶[0062]).
identifying the detected content elements,
Begun further teaches identifying characters based on the detected biometric features, which constitute identifying the detected content elements. Figure 2 of Begun shows wherein faces from a video frame are detected and the actors in the movie are identified, e.g., George Clooney, Brad Pitt and Matt Damon are identified; (Begun: ¶[0058]).
Begun does not expressly disclose wherein detecting content elements from the sampled source content comprises: detecting content elements using an element detection model; and generating bounding boxes over each detected content element.
Simhadri discloses wherein detecting content elements from the sampled source content comprises: detecting content elements using an element detection model; and generating bounding boxes over each detected content element.
Simhadri teaches a ‘detection component 112’ in Fig. 1 (i.e., a model-driven detector) that detects a person in a frame of a video stream using, e.g., YOLOv3; (Simhadri: ¶[0003]; ¶¶[0035-0036]; ¶[0077]; ¶[0080]). Simhadri further discloses generating a bounding box around a person, or the face of a person, or any identifiable features of the human body; (Simhadri: Figs. 4-6; ¶[0028]; ¶[0081]).
Begun teaches the overall content recognition workflow from sampled video content, including extracting a frame and detecting and identifying content related elements (e.g., person/characters) (Begun: ¶¶[0061-0066]). Simhadri teaches a well-known, compatible implementation detail for the detection step, namely using a detection component that generates bounding boxes around detected content elements in a frame (Simhadri: ¶[0003]; ¶[0028]).
Begun in view of Simhadri are combinable because they are from the same field of endeavor of image processing (identifying characters in a video). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Simhadri’s bounding box-based detection into Begun’s content recognition pipeline to improve localization and segmentation of detected elements within sampled frames and to standardize downstream identification processing (e.g., feature extraction and matching). The suggestion/motivation for doing so is to localize detected objects/persons with bounding boxes to support recognition and identification. Accordingly, Claim 1 is unpatentable over Begun in view of Simhadri.
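For illustration only (not a disclosure of either cited reference), the model-driven detection and bounding-box generation mapped above can be sketched as follows; the function name `detect_elements` and the fixed detections are hypothetical stand-ins for a trained detector such as the YOLOv3 detector cited from Simhadri:

```python
# Minimal sketch of model-driven detection with bounding boxes.
# detect_elements and the hard-coded detections are hypothetical; a real
# system would run the frame through a trained model (e.g., YOLOv3).

def detect_elements(frame):
    """Return detected content elements, each with a class label,
    a confidence score, and a bounding box (x, y, width, height)."""
    # Fixed example detections for illustration only.
    return [
        {"label": "person", "score": 0.97, "box": (40, 30, 120, 260)},
        {"label": "person", "score": 0.88, "box": (300, 25, 110, 255)},
    ]

frame = object()  # placeholder for sampled frame pixel data
detections = detect_elements(frame)
boxes = [d["box"] for d in detections]  # one bounding box per detected element
```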
Regarding Claim 2:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 1, wherein identifying the detected content elements comprises:
for each detected content element;
performing alignment over each bounding box;
Simhadri discloses detecting persons within image data and generating bounding boxes corresponding to detected persons, wherein the region defined by each bounding box is cropped, resized, and formatted prior to further processing by downstream models (Simhadri: ¶[0088]). Such cropping, resizing, and normalization of a detected region constitutes alignment over each bounding box, as it standardizes the detected content element for subsequent analysis.
performing quality analysis over the aligned bounding boxes to generate analysis scores, each analysis score being associated with a detected content element; and
Simhadri further discloses that the detection process outputs a confidence score for each detected bounding box, the confidence score indicating the likelihood that the bounding box contains a valid person detection (Simhadri: ¶[0076]). The confidence score is generated per detected bounding box and therefore constitutes an analysis score associated with a detected content element, as recited.
performing matching on each detected content element associated with any analysis score meeting or exceeding a scoring threshold.
Simhadri teaches using the confidence score as a basis for determining whether a detected content element is accepted for further processing (Simhadri: ¶[0076]), which corresponds to applying a scoring threshold.
Once the detected content element satisfies the scoring threshold, Begun teaches performing feature-based matching and classification of the detected content element, including calculating distances between extracted biometric feature vectors and grouping or classifying detected elements on a per-identity basis (Begun: ¶¶[0098-0099]; ¶[0093]).
Thus, the combination discloses performing matching only on detected content elements whose associated analysis scores meet or exceed a threshold, as claimed.
Begun in view of Simhadri are combinable because they are from the same field of endeavor of image processing (identifying characters in a video). It would have been obvious to one of ordinary skill in the art before the effective filing date to combine Simhadri’s bounding box alignment and confidence score thresholding with Begun’s feature based matching and classification techniques. The suggestion/motivation for doing so is to improve recognition accuracy and computational efficiency by filtering low-quality detections before performing matching. Accordingly, Claim 2 is unpatentable over Begun in view of Simhadri.
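For illustration only, the confidence-score gating mapped above can be sketched as follows; the 0.9 threshold and the example scores are hypothetical and are not taken from either reference:

```python
# Sketch of quality/confidence gating before matching. The threshold value
# and example detections are illustrative assumptions, not from the references.

SCORE_THRESHOLD = 0.9

def filter_for_matching(detections):
    """Keep only detections whose analysis score meets or exceeds the threshold."""
    return [d for d in detections if d["score"] >= SCORE_THRESHOLD]

detections = [
    {"id": 1, "score": 0.97},
    {"id": 2, "score": 0.62},  # below threshold: excluded from matching
    {"id": 3, "score": 0.90},  # meets the threshold exactly: included
]
kept = filter_for_matching(detections)
```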
Regarding Claim 3:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 2, wherein performing matching on each detected content element comprises:
extracting embedding associated with the content element;
Begun expressly teaches generating feature embeddings (biometric feature vectors) from detected content elements for use in downstream analysis and classification (Begun: ¶[0093], describing use of embedding-based techniques such as t-SNE for representing biometric features).
matching the extracted embedding against stored embeddings to locate an identity of the element; and
Begun further discloses calculating distances between extracted biometric feature embeddings and other stored biometric features in order to classify, group, or associate detected elements on a per-person (identity) basis (Begun: ¶¶[0098-0099]). Such distance-based comparison of embeddings against stored embeddings constitutes matching the extracted embedding to locate an identity.
outputting the located identity as an identity of the content element.
Begun teaches assigning detected biometric features to an identified person or group and using that classification as the recognized identity of the detected content element (Begun: ¶[0099]), which corresponds to outputting the located identity as claimed.
Accordingly, Claim 3 is unpatentable over Begun in view of Simhadri.
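For illustration only, the distance-based embedding matching mapped above can be sketched as follows; the stored embeddings, the Euclidean metric, and the identity names are illustrative assumptions (the actor names echo Begun's Fig. 2 example), not disclosures of the references:

```python
import math

# Sketch of distance-based embedding matching. The embedding values and
# the choice of Euclidean distance are illustrative assumptions.

def euclidean(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_identity(embedding, stored):
    """Return the stored identity whose embedding is nearest to the input."""
    return min(stored, key=lambda name: euclidean(embedding, stored[name]))

stored = {
    "George Clooney": [0.9, 0.1, 0.3],
    "Brad Pitt":      [0.2, 0.8, 0.5],
}
query = [0.85, 0.15, 0.35]  # embedding extracted from a detected face
identity = match_identity(query, stored)
```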
Regarding Claim 4:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 3, wherein matching the extracted embedding against the stored embeddings to locate an identity of the content element is performed on a server.
Simhadri discloses wherein matching the extracted embedding against the stored embeddings to locate an identity of the content element is performed on a server.
Begun discloses matching detected facial content features against stored reference data to identify a content element. Specifically, Begun teaches a Face Recognition Engine (Begun: Fig. 2, S210 ‘Face Detection Engine’) that compares detected face information against a ‘faces database’ (Begun: Fig. 2; ¶[0058]) to determine an identity of the detected content element (e.g., actor identification). See Fig. 2 (face detection engine – face recognition engine – faces database – identified actors), and ¶¶[0098-0099], which describe calculating distances between biometric features and classifying and grouping features on a per-person basis to identify individuals.
Simhadri complements Begun by teaching a recognition pipeline in which detected and qualified content elements are processed by downstream recognition functionality once they satisfy confidence thresholds, reinforcing that recognition matching is a discrete processing stage suitable for modular deployment; (Simhadri: ¶¶[0076-0088]).
Simhadri expressly teaches performing processing in a server-based architecture, wherein computationally intensive operations and reference data are maintained and executed on a server rather than exclusively on a client device. See Simhadri, system architecture discussion describing server-side processing in communication with client devices (e.g., a server performing core processing using stored data); (Simhadri: ¶[0163]; ¶[0154]; ¶[0169]; ¶¶[0173-0175]; Fig. 25).
Begun in view of Simhadri are combinable because they are from the same field of endeavor of image processing (e.g., identifying characters in a video).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to perform the matching of extracted embeddings against stored embeddings in Begun on a server, as taught by Simhadri. The suggestion/motivation for doing so is that centralizing the matching process on a server improves computational efficiency, enables shared access to large reference libraries, and facilitates maintenance and updating of stored embeddings, all of which improve design considerations in content recognition. Accordingly, it would have been obvious to combine Begun with Simhadri to arrive at the subject matter of claim 4.
Regarding Claim 5:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 1, further comprising:
for each of the identified content elements:
searching for at least one matching work associated with the identified content element;
Begun explains identifying characters in a sampled video and using those identified characters to identify the video content by matching against a content-specific character database/character list (e.g., locating video content works associated with the identified character(s)); (Begun: ¶[0007]).
Begun further describes operations in which a first character is identified and used for identifying the video content (with the identification relying on stored character information for video content); (Begun: ¶¶[0061-0066]; ¶¶[0070-0073]; Fig. 6).
grouping the at least one matching work with the identified content element into a set of works associated with the identified content element; and
Begun’s approach of identifying video content by matching identified character(s) to stored character lists for respective video contents necessarily yields, for a given identified character, a collection of candidate video content items in the database associated with that character (i.e., a “set” of works associated with the identified content element, as claimed); (Begun: ¶[0007], ¶¶[0070-0075]; Fig. 6).
determining whether an intersecting work exists between the sets of works.
Simhadri expressly discloses evaluating overlap between result sets using an intersection operation (e.g., intersection over union and set based overlap determinations), which teaches determining whether an intersecting item exists between multiple sets derived from detected elements; (Simhadri: ¶[0082]).
Begun in view of Simhadri are combinable because they are from the same field of endeavor of image processing (identifying characters in a video). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Simhadri’s explicit set intersection determination into Begun’s multi-element candidate work aggregation framework. The suggestion/motivation for doing so is to resolve a single source work common to multiple detected content elements and to improve the identification accuracy and reduce ambiguity when multiple element specific candidate sets are present. Accordingly, Claim 5 is unpatentable over Begun in view of Simhadri.
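For illustration only, the per-element candidate-work sets and the claimed intersection determination mapped above can be sketched as follows; the film titles other than Ocean's Eleven (which Begun's Fig. 2 example names) are hypothetical:

```python
# Sketch of grouping candidate works per identified element and determining
# whether an intersecting work exists. Titles are illustrative placeholders.

works_by_element = {
    "George Clooney": {"Ocean's Eleven", "Gravity", "Up in the Air"},
    "Brad Pitt":      {"Ocean's Eleven", "Moneyball", "Fight Club"},
    "Matt Damon":     {"Ocean's Eleven", "The Martian"},
}

def intersecting_works(sets_of_works):
    """Return the works common to every identified element's candidate set."""
    sets = list(sets_of_works)
    return set.intersection(*sets) if sets else set()

common = intersecting_works(works_by_element.values())
```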
Regarding Claim 6:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 5, further comprising:
finding one intersecting work between the sets of works, and outputting the intersecting work as an identity of the source content.
Begun further discloses that, after comparing the per-element lists of candidate works and identifying common candidate content, the system proceeds with content identification/output based on the identified matching content. In particular, Begun describes identifying the video content that includes multiple identified characters after comparing the lists associated with those characters (i.e., narrowing to the common works) and thereby determining the identified content; (Begun: Fig. 2, bottom left, wherein it is determined from all the identified actors that the video frames are from Ocean’s Eleven; ¶¶[0068-0069], comparing lists corresponding to each character and identifying the video content including two or more characters; ¶[0070], continuing the method flow after the identification step at S540).
This corresponds to claim 6’s requirement of finding one intersecting work (i.e., a single common work after comparison) and outputting the intersecting work as an identity of the source content, because the common result of the per-element lists is used as the identified content output; (Begun ¶¶[0068-0069]).
Begun explicitly discloses identifying (and thereby outputting/returning) the video content (the source content) based on matching identified characters to stored character lists for video contents in a database, e.g., the workflow resolves to the video content work that satisfies the identified character matching constraints; (Begun: ¶[0007], ¶¶[0070-0071]).
Accordingly, Claim 6 is unpatentable over Begun in view of Simhadri.
Regarding Claim 8:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 5, further comprising:
finding more than one intersecting work among the sets of works:
Begun’s approach of identifying video content by matching identified character(s) to stored character lists for respective video contents necessarily yields, for a given identified character, a collection of candidate video content items in the database associated with that character (i.e., a “set” of works associated with the identified content element, as claimed); (Begun: ¶[0007], ¶¶[0070-0075]; Fig. 6).
Simhadri expressly discloses evaluating overlap between result sets using an intersection operation (e.g., intersection over union and set based overlap determinations), which teaches determining whether an intersecting item exists between multiple sets derived from detected elements; (Simhadri: ¶[0082]).
detecting an additional content element from the sampled source content;
Simhadri describes ML based detection of people and objects in frames and continuing processing based on detections (object and person detection context). See, e.g., Simhadri ¶[0082] (YOLO based detector), which supports the art recognized approach of detecting additional elements as needed from visual content.
identifying the detected additional content element;
Begun describes identifying corresponding characters based on detected biometric features, i.e., identification of detected persons and characters. See Begun ¶[0062] (identifying corresponding characters from biometric features) and ¶[0063] (repeat the same character- based process on another frame).
searching for at least one additional matching work associated with the identified additional content element;
Begun uses identified characters (e.g., actors and actresses) with content databases (e.g., IMDB) to identify video content based on identified elements. See Begun ¶[0061] (use content specific character database to identify video content) and ¶[0062] (explicitly referencing IMDB type sources in the identification workflow).
grouping the at least one additional matching work associated with the identified additional content element into an additional set of works; and
Begun teaches maintaining lists/databases of identified and non-identified biometric features and character lists associated with content identification workflows (i.e., aggregating candidate identity evidence for later matching), including using a Videos List/Short List during detection and identification.
determining whether an intersecting work exists between the sets of works and the additional set of works.
Begun teaches that identification is performed “based on” matching identified characters against character lists for video content, and when not resolved repeats on another frame to reach identification (i.e., iterative narrowing to a resolvable identification).
Begun in view of Simhadri are combinable because they are from the same field of endeavor of image processing (identifying characters in a video). It would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate Simhadri’s ML based detection of people and objects in frames into Begun’s multi-element candidate work aggregation framework. The suggestion/motivation for doing so is to resolve a single source work common to multiple detected content elements and to improve the identification accuracy and reduce ambiguity when multiple element specific candidate sets are present. Accordingly, Claim 8 is unpatentable over Begun in view of Simhadri.
Regarding Claim 9:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 8, further comprising:
finding one intersecting work among the sets of works and the additional set of works, and outputting the intersecting work as an identity of the source content.
As disclosed in the rejection of Claim 8, the sets of works and the additional set of works are narrowed down to a singular video from a list of videos (Begun: ¶[0007], ¶¶[0070-0075]; Fig. 6 and bottom left of Fig. 24).
Accordingly, Claim 9 is unpatentable over Begun in view of Simhadri.
Regarding Claim 11:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 1, wherein the source content comprises an audio, visual, or audiovisual work.
Begun teaches wherein “A frame may include not only image data but also acoustic data.” (Begun: Fig. 6, steps S510, S610, and S620; ¶[0071]).
Accordingly, Claim 11 is unpatentable over Begun in view of Simhadri.
13. Claims 7 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Begun in view of Simhadri as applied to claims 5 and 8 above, and further in view of Shen et al. (US 2021/0117691), hereinafter ‘Shen’.
Regarding Claim 7:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 5, comprising:
finding no intersecting work between the sets of works,
Begun explains that when the system fails to identify the content element based on the current frame sample, the identification attempt is unsuccessful and the process does not resolve to a determinative identity at that stage (Begun: ¶[0113]). This corresponds to the claimed condition where the current candidate resolution attempt yields no successful intersection of results.
Note that Simhadri expressly uses “intersection” terminology (intersection over union) in connection with bounding boxes and object detection; (Simhadri: ¶[0082]).
restarting content sampling process.
Begun expressly teaches performing the identification process on another frame when identification is not achieved, thereby restarting the sampling and analysis process with new content; (Begun: ¶[0113]).
Simhadri also discloses accessing and operating on video frames as discrete samples, supporting repeated sampling from source content after an unsuccessful attempt; (Simhadri: ¶¶[0059-0060]).
Begun in view of Simhadri do not expressly disclose discarding the sets of works.
Shen discloses discarding the sets of works.
Shen teaches discarding frames or samples that are not analyzed or not useful for continued processing, i.e., abandoning a non-productive attempt rather than carrying it forward. In the context of claim 5’s grouping of candidate works, this teaches discarding the current sets when they do not resolve the identity; (Shen: ¶[0032]).
Begun, Simhadri, and Shen are combinable because they are from the same field of endeavor of image processing (e.g., identifying characters in a video). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to discard a non-resolving candidate attempt and restart sampling with new content, as taught by Begun and Shen, using routine frame-based processing as taught by Simhadri. The suggestion/motivation for doing so is to improve robustness and efficiency when the current attempt fails to yield a determinative identification. Accordingly, it would have been obvious to combine Begun, Simhadri, and Shen to arrive at the subject matter of claim 7.
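For illustration only, the discard-and-restart behavior mapped to claim 7 can be sketched as follows; `identify_from_frame` and the sample data are hypothetical placeholders for the per-frame detection, identification, and candidate-set grouping steps described above:

```python
# Sketch of discarding non-intersecting candidate sets and restarting the
# sampling process on the next frame. Helper names and data are hypothetical.

def identify_source(frames, identify_from_frame):
    """Try each sampled frame in turn; when the per-element candidate sets
    do not intersect in a single work, discard them and resample."""
    for frame in frames:
        sets_of_works = identify_from_frame(frame)
        common = set.intersection(*sets_of_works) if sets_of_works else set()
        if len(common) == 1:
            return common.pop()  # resolved identity of the source content
        # no unique intersecting work: discard the sets and restart sampling
    return None

# First sample yields disjoint sets (discarded); second sample resolves.
samples = [
    [{"A", "B"}, {"C"}],
    [{"Ocean's Eleven", "B"}, {"Ocean's Eleven"}],
]
result = identify_source(samples, lambda frame: frame)
```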
Regarding Claim 10:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 8, further comprising:
finding no intersecting work between the sets of works,
Begun explains that when the system fails to identify the content element based on the current frame sample, the identification attempt is unsuccessful and the process does not resolve to a determinative identity at that stage (Begun: ¶[0113]). This corresponds to the claimed condition where the current candidate resolution attempt yields no successful intersection of results.
Note that Simhadri expressly uses “intersection” terminology (intersection over union) in connection with bounding boxes and object detection; (Simhadri: ¶[0082]).
restarting content sampling process.
Begun expressly teaches performing the identification process on another frame when identification is not achieved, thereby restarting the sampling and analysis process with new content; (Begun: ¶[0113]).
Simhadri also discloses accessing and operating on video frames as discrete samples, supporting repeated sampling from source content after an unsuccessful attempt; (Simhadri: ¶¶[0059-0060]).
Begun in view of Simhadri do not expressly disclose discarding the sets of works.
Shen discloses discarding the sets of works.
Shen teaches discarding frames or samples that are not analyzed or not useful for continued processing, i.e., abandoning a non-productive attempt rather than carrying it forward. In the context of claim 8’s grouping of candidate works, this teaches discarding the current sets when they do not resolve the identity; (Shen: ¶[0032]).
Begun, Simhadri, and Shen are combinable because they are from the same field of endeavor of image processing (e.g., identifying characters in a video). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to discard a non-resolving candidate attempt and restart sampling with new content, as taught by Begun and Shen, using routine frame-based processing as taught by Simhadri. The suggestion/motivation for doing so is to improve robustness and efficiency when the current attempt fails to yield a determinative identification. Accordingly, it would have been obvious to combine Begun, Simhadri, and Shen to arrive at the subject matter of claim 10.
14. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Begun in view of Simhadri as applied to claim 1 above, and further in view of Zadeh et al. (US 11,914,674), hereinafter ‘Zadeh’.
Regarding Claim 12:
The proposed combination of Begun in view of Simhadri further discloses the method of claim 1, wherein the content element is a person, vehicle,
Begun teaches detecting biometric features (e.g., facial/biometric features) from the extracted frame. These biometric feature-bearing regions correspond to content elements (e.g., persons/faces) detected from the sampled source content; (Begun: Fig. 5 flowchart: ‘detect biometric features from extracted frame’ at S530; ¶[0062]).
Begun expressly discloses sampling audiovisual source content and performing object detection on sampled frames (Begun: ¶¶[0022-0028]; ¶¶[0031-0036]; Fig. 2).
Begun further discloses wherein the biometric features include ‘faces, voices, gaits, ears, and hand shapes’ (Begun: Fig. 1).
Simhadri describes ML based detection of people and objects in frames and continuing processing based on detections (object and person detection context). See, e.g., Simhadri ¶¶[0078-0082] (YOLO based detector), which supports the art recognized approach of detecting additional elements as needed from visual content, such as logos (¶[0051]).
Note that it is well known for a standard YOLO dataset class list to include ‘person, vehicle, plant, animals, clothing, sign (such as traffic light or stop sign), textual or numeric information, landmark locations, and posters’ (which correspond to visual works).
Begun in view of Simhadri do not expressly disclose wherein the content element is a building, city, geographic feature, slogan, symbol, word, jingle, brand, trade name, trademark.
Zadeh discloses wherein the content element is a building, city, geographic feature, slogan, symbol, word, jingle, brand, trade name, trademark.
Zadeh discloses intelligent recognition of names, patterns, and semantic constructs, which correspond to brand names (Zadeh: Col. 109, lines 49-67), trade names (Zadeh: Col. 109, lines 49-67), trademarks (Zadeh: Col. 109, lines 49-67), slogans (Zadeh: Col. 105, lines 1-14), words (Zadeh: Col. 261, line 62 – Col. 262, line 20), symbols (Zadeh: Col. 201, lines 37-49), and jingles (Zadeh: Col. 105, lines 1-14).
Zadeh discloses recognition of structured semantic entities and contextual constructs, which encompass buildings (Zadeh: Col. 14, lines 2-26), cities (Zadeh: Col. 212 line 56 – Col. 213, line 4), geographic entities (Zadeh: Col. 200, lines 19-44), landmarks (Zadeh: Col. 14, lines 2-26) and location-based identifiers (Zadeh: Col. 14, lines 2-26).
Begun, Simhadri & Zadeh are combinable because they are from the same field of endeavor of image processing (e.g., identifying objects in a video). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Zadeh’s semantic name and pattern recognition capabilities into the object detection and content recognition framework of Begun and Simhadri. The suggestion/motivation for doing so is to broaden the range of detectable content elements beyond purely visual object classes to include semantic identifiers such as brand names, trademarks, geographic entities, and structured concepts to improve robustness and coverage of content element identification across audiovisual content. Accordingly, it would have been obvious to combine Begun, Simhadri, and Zadeh to arrive at the subject matter of claim 12.
Conclusion
15. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lakhani et al. (US 2016/0358632) describes a method and system that can generate video content from a video. The method and system can include generating audio files and image files from the video, distributing the audio files and the image files across a plurality of processors, and processing the audio files and the image files in parallel. The audio files associated with the video can be converted to text, and the image files associated with the video can be converted to video content. The text and the video content can be cross-referenced with the video.
16. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NEIL R MCLEAN whose telephone number is (571)270-1679. The examiner can normally be reached Monday-Thursday, 6AM - 4PM, PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi M Sarpong can be reached at 571.270.3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NEIL R MCLEAN/Primary Examiner, Art Unit 2681