DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to claim amendments and remarks filed by Applicant’s representative on January 7, 2026. Claims 2-21 are pending, no claims have been canceled, and no new claims have been added.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of 35 U.S.C. 112(b):
(b) Conclusion. – The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim(s) 16, 17, 19, and 21 is/are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Claim elements of claim 16 reciting in part “means for receiving, over a network, a first video stream…”, “means for using one or more object detection algorithms…”, “means for using one or more layout analysis algorithms…”, “means for extracting one or more word groups…”, “means for associating with each word group in the one or more word groups a timestamp…”, “means for integrating the word group with a text-based transcript…”, and “means for generating a response based on the one or more word groups and respective timestamps…”;
Claim elements of claim 17 reciting in part “means for using one or more object detection algorithms…”, “means for using a content classification algorithm to generate…”, “means for generating an image of the graphic…”, and “means for storing the image of the graphic…”;
Claim elements of claim 19 reciting in part “means for using one or more gesture recognition algorithms…”, and
Claim elements of claim 21 reciting in part “means for using an emotion detection algorithm…” and “means for storing the textual description of the detected emotion and the timestamp…” -- are limitations that invoke 35 U.S.C. 112(f). However, the written description fails to disclose the corresponding structure, material, or acts for the claimed functions. The specification and/or drawings fail to expressly state or show the ‘corresponding hardware structure’ for the claimed elements above, stating only that embodiments of the invention can be implemented partially as hardware and/or a computer program/software.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph; or
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the claimed function, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim(s) recite(s) sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Claim Objections
Claim(s) 3-7, 10-14 & 17-21 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Amendments and Remarks
Applicant’s latest-filed claim amendments and corresponding remarks dated January 7, 2026 have been received and fully considered. Applicant’s remarks and/or comments are generally directed to the current claim amendment(s) and are accordingly deemed moot in light of the new grounds of rejection provided with this action.
With regard to Applicant’s latest amendments and remarks, Applicant first notes and remarks that the independent claims, and particularly independent claim 2, have been further amended to now expressly recite:
“A system to derive a digital representation of an online meeting using contextual data inferred from non-verbal communications, the system comprising:
a processor; and
a memory storage device storing instructions thereon, which, when executed by the processor, cause the system to perform operations comprising:
receiving, over a network, a first video stream from a client computing device of a first meeting participant, the first video stream representing content presented via display or application interface and shared by the first meeting participant with one or more other meeting participants via a content sharing feature of the online meeting;
using one or more object detection algorithms to process the first video stream to detect one or more regions of interest, each detected region of interest depicting a collection of text; and
for each detected region of interest depicting a collection of text:
using one or more layout analysis algorithms to process the collection of text to identify a structure for the collection of text;
extracting one or more word groups from the collection of text based on the identified structure for the collection of text; and
associating with each word group in the one or more word groups a timestamp indicating the time during the online meeting when the collection of text, from which the word group was extracted, was shared;
for each of the one or more word groups, integrating the word group with a text-based transcript generated from a speech-to-text algorithm, wherein the integration of the word group with corresponding spoken dialogue within the transcript based on the timestamp associated with the word group creates a chronological sequence of both verbal and non-verbal communications for the online meeting, and further annotating the integrated word group to include i) the timestamp for the word group, and ii) information identifying the first meeting participant; and
responsive to a query from an end-user of a meeting analyzer service, generating a response based on the one or more word groups and respective timestamps, wherein the generating of the response involves providing as input to a generative language model a text-based prompt comprising an instruction derived from at least the query from the end-user”.
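For purposes of clarifying the record, the claimed operations amount to the following processing flow (a minimal, hypothetical Python sketch; every callable named below is an illustrative placeholder, not Applicant's disclosed implementation):
```python
# Hypothetical sketch of the flow recited in amended claim 2. The detection,
# layout-analysis, and extraction callables are placeholders only.
def process_shared_stream(frames, transcript, participant,
                          detect_text_regions, analyze_layout,
                          extract_word_groups, share_time):
    """frames: video frames from the first participant's shared content.
    transcript: list of (timestamp_sec, speaker, text) from speech-to-text.
    Returns the transcript with timestamped word groups merged in
    chronologically (verbal and non-verbal communications interleaved)."""
    entries = list(transcript)
    for frame in frames:
        # object detection: regions of interest depicting collections of text
        for region in detect_text_regions(frame):
            # layout analysis: identify a structure for the collection of text
            structure = analyze_layout(region)
            # extract word groups based on the identified structure
            for group in extract_word_groups(region, structure):
                ts = share_time(frame)  # when the collection of text was shared
                # annotation: timestamp + information identifying the sharer
                entries.append((ts, participant, group))
    entries.sort(key=lambda e: e[0])  # chronological sequence by timestamp
    return entries
```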
With respect to the above, Applicant notes and remarks that none of the prior art references applied in rejecting independent claim 2 [Shepherd et al], either individually or in combination with other prior art disclosures, expressly and properly discloses or suggests the above amended claim features of receiving, over a network, a first video stream from a client computing device of a first meeting participant, the first video stream representing content ‘presented via display or application interface and shared by the first meeting participant with one or more other meeting participants via a content sharing feature of the online meeting’ -- as currently recited by amended independent claim 2 above (and similarly in independent claims 9 & 16). In particular, Applicant remarks that the current prior art of record does not properly disclose analysis of application-generated content shared via content sharing features for the purpose of extracting and integrating content into meeting transcripts {meeting summaries / recordings} [Applicant Remarks: par 2, pg. 12 – par 1, pg. 13]. Accordingly, Applicant remarks that the independent claims are distinguishable from the applied prior art and/or prior art combinations used to reject the claims, and that the respective dependent claims are also distinguishable by virtue of their dependency on their respective parent independent claims.
However, in response to Applicant’s amended features and associated remarks, the Office notes that the newly amended features are now expressly taught or disclosed in view of teachings and/or disclosures by at least Springer et al, as discussed and cited below in this action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 8, 9, 15, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shepherd et al (hereinafter Shepherd), US Patent Publication 20230237837 A1 (filing date January 2022) in view of Dinicola et al (hereinafter Dinicola), US Patent Publication 20100253689 A1 (publication date October 2010) and in further view of Master Ben-Dor et al (hereinafter Ben-Dor), US Patent Publication 20220414130 A1 (publication date December 2022) and in further view of Springer et al (hereinafter Springer), US Patent Publication 20220414130 A1 (filing date June 2022).
As per claims 2, 9, 16, Shepherd discloses particular recited feature(s) of the invention, such as a system (Shepherd: e.g., Computer 1202) [0152, Fig. 12] to derive a digital representation of an online meeting (Shepherd: e.g., online sessions such as ‘multi-participant video meetings’) [0001] using contextual data inferred from non-verbal communications (Shepherd: e.g., discloses as his invention ‘a system that can determine, from a video of an online session, respective bounding boxes of text names of people, wherein the text names are presented in the video, and wherein images of the people are present in the video’) [Abstract, 0003, Fig. 1] (e.g., the system binds and displays an ‘identified / extracted name of a participant’ next to a video of that participant and outputs a recorded ‘set of facial frames of an individual bound to their name’, for example) [0043-0048, Figs. 2 & 6] (e.g., for each {video} frame, a goal can be to identify ‘bounding boxes’ for ‘faces’ as well as ‘text’ of a participant's name. This can be implemented with different neural network models for each of text detection and face detection. The detected text and faces can be associated by identifying a face box among face boxes that is the shortest distance from a given text box. An output can be recorded facial frames of an individual bound to their name…One step in binding names to faces can be identifying a name, such as in Text detection 204) [0043-0046; Fig. 2], the system comprising:
a processor (Shepherd: e.g., Processing Unit_1204) [0152, Fig. 12]; and
a memory storage device storing instructions thereon (Shepherd: e.g., System Memory 1206) [0152, Fig. 12], which, when executed by the processor, cause the system to perform operations comprising:
receiving, over a network, a first video stream from a client computing device of a first meeting participant (Shepherd: e.g., the system can determine, from a ‘video of an online session’ {video stream}, respective bounding boxes of text names of people, wherein the ‘text names’ are presented in the video, and wherein ‘images of the people’ {meeting participants} are present in the video. The system can determine, from the video, respective ‘faces’ of the people) [0003], the first video stream representing content shared by the first meeting participant with one or more other meeting participants of the online meeting
(Shepherd: e.g., ‘Session Screen-Share Recording’) [0061] [0068; Fig. 3] (e.g., Screen Share_408) [0071; Fig. 4] (e.g., Screen share 408 can comprise an image of one participant's computer screen that is shared with other participants in the online session. In some examples, Screen share 408 comprises ‘text’ which can be analyzed) [0072; Fig. 4];
using one or more object detection algorithms (Shepherd, e.g., via ‘Text detection 204’) [0033-0036] to process the first video stream to detect one or more regions of interest, each detected region of interest depicting a collection of text (Shepherd: e.g., System architecture 200 comprises gallery recordings 202, Text detection 204, Facial object detection 206, position of boundary boxes 208, Position of ‘Region of Interest & Mask’ 210, associate text and faces 212, optical character recognition 214, and output 216…Gallery recordings 202 can be similar to online session video recording 108 of FIG. 1. A ‘recording’ can be processed in two ways — by Text detection 204 and by Facial object detection 206. Text detection 204 can identify ‘text’ in a recording, such as ‘text that identifies a participant's name’. Facial object detection 206 can ‘identify the face of a participant’ in the recording) [0033-0036, Figs. 2 & 3]; and
for each detected region of interest depicting a collection of text:
using one or more layout analysis algorithms to process the collection of text to identify a structure for the collection of text (Shepherd, e.g., discloses a system that can determine, from a video of an online session, respective ‘bounding boxes’ of text names of people, wherein the text names are presented in the video, and wherein images of the people are present in the video. The system can determine, from the video, respective faces of the people. The system can associate a first bounding box of the bounding boxes with a first face of the faces based on the first bounding box satisfying a function of distance with respect to the first face among the faces. The system can extract a ‘name’ {text} from the first bounding box via optical character recognition.) [Abstract, 0003] (e.g., System architecture 200 comprises gallery recordings 202, Text detection 204, facial object detection 206, position of boundary boxes 208, Position of region of interest and mask 210, Associate text and faces 212, Optical Character Recognition 214, and Output 216…Gallery recordings 202 can be similar to ‘Online session video recording’_108 of FIG. 1. A recording can be processed in two ways — by text detection 204 and by Facial object detection 206. Text detection 204 can identify text in a recording, such as text that identifies a participant's name. Facial object detection 206 can identify the face of a participant in the recording…Text detection 204 can output a ‘position of boundary boxes 208’ in frames of a video, such as a ‘bounding box’ for Name_304A of FIG. 3. A ‘position’ can, for example, identify coordinates within a two-dimensional space that identify the corners of the bounding box…Associate Text and Faces 212 can take ‘position’ of boundary boxes 208 and ‘position of Region of Interest’ and mask 210 to associate respective texts with respective faces. In some examples, this can be performed in a similar manner as the example of FIG. 6…Optical character recognition 214 can perform ‘optical character recognition’ on position of boundary boxes 208 to identify what text is found within those boundary boxes) [0031-0038; Figs. 2 & 6];
extracting one or more word groups from the collection of text based on the identified structure for the collection of text (Shepherd, e.g., discloses a system that can determine, from a video of an online session, respective ‘bounding boxes’ of text names of people, wherein the text names are presented in the video, and wherein images of the people are present in the video. The system can determine, from the video, respective faces of the people. The system can associate a first bounding box of the bounding boxes with a first face of the faces based on the first bounding box satisfying a function of distance with respect to the first face among the faces. The system can extract a ‘name’ {text} from the first bounding box via optical character recognition.) [Abstract, 0003-0004] (e.g., Optical character recognition 214 can perform ‘optical character recognition’ on position of boundary boxes 208 to ‘identify what text is found within those boundary boxes’) [0031-0038; Figs. 2 & 6] (e.g., Operation 910 depicts ‘extracting’ a name from the bounding box via optical character recognition. That is, optical character recognition can be applied to the video to determine what the name is in ‘text’) [0102; Fig. 9]; and
associating with each word group in the one or more word groups a timestamp indicating the time during the online meeting when the collection of text, from which the word group was extracted, was shared (Shepherd, e.g., Output 216 can comprise, for each time period, a gallery recording {metadata} & {time stamp, person name, region of interest coordinates, and mask coordinates}) [0059] (e.g., In some examples, Operation 1006 comprises determining a ‘group of bounding box’ information, respective bounding box information of the group of bounding box information comprising respective first ‘timestamps’, respective first sources, and respective coordinates within the video; and determining a group of face information, respective face information of the group of face information comprising respective second timestamps within the video, respective second coordinates within the video, and respective masks. In such examples, Operation 906 can comprise associating a first timestamp of the respective first timestamps of the group of bounding box information with a second timestamp of the respective second timestamps of the group of face information. That is, ‘certain information’ can be extracted from text detection of a video {i.e., ‘names’}, other information can be extracted from facial detection of the video {i.e., ‘faces’}, and then these two outputs can be combined to associate names with faces) [0120; Figs. 9 & 10] (e.g., Screen share 408 can comprise an image of one participant's computer screen that is shared with other participants in the online session. In some examples, Screen share 408 comprises ‘text’, which can be analyzed, and determined that it does not identify a participant's ‘name’…) [0072; Fig. 4];
for each of the one or more word groups, integrating the word group with a text-based transcript generated from a speech-to-text algorithm (Shepherd, e.g., Virtual meetings (sometimes referred to as ‘online meetings’ or online sessions) provide an opportunity to garner information from these meetings. For example, there can be ‘voice to text transcription’ for the virtual meeting…) [0019], (Shepherd, e.g., Output 216 can comprise, for each time period, a gallery recording {metadata} & {‘time stamp’, ‘person name’, ‘region of interest coordinates’, and ‘mask coordinates’}) [0059].
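For context, the text-to-face binding Shepherd describes at [0043-0046] (associating each detected text box with the face box at the shortest distance) is of the following general form (an illustrative sketch only; all identifiers are hypothetical):
```python
# Hypothetical sketch of nearest-box association per Shepherd [0043-0046]:
# each detected text (name) box is bound to the nearest face box.
from math import dist

def center(box):
    # box = (x1, y1, x2, y2); returns the box's center point
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def bind_names_to_faces(text_boxes, face_boxes):
    """Bind each text bounding box to the face box nearest to it."""
    return [
        (t, min(face_boxes, key=lambda f: dist(center(t), center(f))))
        for t in text_boxes
    ]
```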
But while Shepherd discloses substantial features of the invention as above, he does not expressly disclose the additional recited feature{s} of wherein the integration of the word group with corresponding spoken dialogue within the transcript based on the timestamp associated with the word group creates a chronological sequence of both verbal and non-verbal communications for the online meeting. Nonetheless, the feature{s} is/are expressly disclosed by Dinicola in a related endeavor.
Dinicola particularly discloses the additional recited feature{s} of ‘wherein the integration of the word group with corresponding spoken dialogue within the transcript based on the timestamp associated with the word group creates a chronological sequence of both verbal and non-verbal communications for the online meeting’ (Dinicola: e.g., In Conference transcript 200, illustrated in FIG. 2, four illustrative conference participants (210, 220, 230 and 240) are participating and, as each participant ‘speaks’ {‘verbal’ communication}, their speech recognized, for example, with the use of a ‘speech-to-text converter’ and logged in the ‘transcript’. In addition, there is an Emotion section 250 that summarizes one or more of the various ‘emotions and gestures’ {‘non-verbal’ communication} recognized as time proceeds through the video conference. The emotion section 250 can be participant-centric, and can also include motion and/or gesture information for a plurality of participants that may coincidently be performing the same gesture or experiencing the same emotion) [0067; Figs. 2-3].
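The chronological interleaving Dinicola describes (speech-to-text entries merged with timestamped emotion/gesture entries [0067]) is of the following general form (an illustrative sketch; identifiers are hypothetical):
```python
# Hypothetical sketch of a timestamp-ordered merge suggested by Dinicola's
# conference transcript 200 and emotion section 250 [0067]: verbal entries
# (speech-to-text) and non-verbal entries (recognized emotions/gestures)
# are interleaved into one chronological log.
import heapq

def merge_transcript(spoken, nonverbal):
    """spoken, nonverbal: lists of (timestamp_sec, participant, description),
    each sorted by timestamp; returns a single chronological sequence."""
    return list(heapq.merge(spoken, nonverbal, key=lambda e: e[0]))
```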
It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify and/or combine Shepherd’s invention with the above additional feature, as expressly disclosed by Dinicola, for the motivation of providing a method for communicating descriptions of verbal and/or non-verbal communications via alternate (audible, textual, and/or graphic) means, which also provides feedback to a presenter or speaker about non-verbal cues they are exhibiting that they may want to be aware of [Dinicola: Abstract, 0004-0005, Figs. 1-3].
Further, while the combination of Shepherd and Dinicola discloses substantial features of the invention as above, they do not expressly disclose the additional recited feature{s} of responsive to a query from an end-user of a meeting analyzer service, generating a response based on the one or more word groups and respective timestamps, wherein the generating of the response involves providing as input to a generative language model a text-based prompt comprising an instruction derived from at least the query from the end-user. Nonetheless, the feature{s} is/are expressly disclosed by Master Ben-Dor in a related endeavor.
Master Ben-Dor particularly discloses the additional recited feature{s} of ‘responsive to a query from an end-user of a meeting analyzer service, generating a response based on the one or more word groups and respective timestamps, wherein the generating of the response involves providing as input to a generative language model a text-based prompt comprising an instruction derived from at least the query from the end-user’ (Master Ben-Dor: e.g., FIG. 4 is a diagram illustrating a graphical user interface (GUI) 400 displaying a Transcript 404 and an associated Chatbot 402 according to an embodiment. In some examples, the GUI 400 is displayed on a system such as system 100 of FIG. 1 as described herein. The GUI 400 includes a meeting Chatbot 402 that enables a user of the GUI 400 to ask ‘queries’ about the associated transcript displayed in a transcript section 404. The transcript section 404 is configured to display some or all of the text data of the transcript (e.g., transcript 106) to the user. Further, the transcript section 404 may be configured to display specific portions of the transcript based on ‘questions’ asked to the chatbot 402 and/or based on references provided by the chatbot 402 in response to questions…At 406, the user of the GUI 400 asks a query 406, “What topics did the meeting cover?”. In response, the chatbot provides a list of topics, including topics #1, #2, and #3. It should be understood that, in real situations, the topic names may include specific descriptors of the topics, rather than identifying them by numbers as illustrated. In some examples, the Query 406 is provided to the system of the GUI 400 as a ‘natural language query’ and that natural language query is processed by a transcript query engine (e.g., transcript query engine 104) and the response from the chatbot 402 is based on a generated query response (e.g., query response 134) from the engine) [0071-0072; 0074-0075; Figs. 4 & 5].
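The query-to-response flow Master Ben-Dor describes (a natural language query answered from transcript content [0071-0075]) is of the following general form (an illustrative sketch; the generate callable is a hypothetical stand-in for any generative language model interface, not a specific vendor API):
```python
# Hypothetical sketch of a transcript chatbot in the manner of Master
# Ben-Dor's chatbot 402 / transcript query engine [0071-0075].
def answer_query(query, word_groups, generate):
    """word_groups: list of (timestamp_sec, participant, text).
    Builds a text-based prompt from the end-user query plus timestamped
    excerpts and returns the model's response."""
    context = "\n".join(
        f"[{ts}s] {who}: {text}" for ts, who, text in word_groups
    )
    prompt = ("Answer the user's question using only the meeting excerpts "
              f"below.\nExcerpts:\n{context}\n\nQuestion: {query}\nAnswer:")
    return generate(prompt)  # any callable: text prompt in, text out
```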
It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with the above additional feature(s), as expressly disclosed by Master Ben-Dor, for the motivation of providing a computerized method for providing responses to ‘natural language queries’ associated with transcripts at least by searching multiple indexes [Master Ben-Dor: Abstract, 0013-0015, Figs. 1 & 4-6].
Moreover, while the combination of Shepherd, Dinicola and Ben-Dor discloses substantial features of the invention above, they do not expressly disclose the additional recited feature{s} of the first video stream representing content presented via display or application interface and shared by the first meeting participant with one or more other meeting participants via a content sharing feature of the online meeting. Nonetheless, the feature{s} is/are expressly disclosed by Springer in a related endeavor.
Springer particularly discloses the additional recited feature{s} of the first video stream representing content presented via display or application interface and shared by the first meeting participant with one or more other meeting participants via a content sharing feature of the online meeting (Springer: e.g., expressly discloses as his invention a method and system for recording of a conference {multimedia meeting} that is selectively configured to include ‘only certain conference content’. The content comprises at least one of: ‘audio content’ from one or more participant user devices connected to the conference, ‘camera-generated video content’ from the one or more participant user devices, or ‘screensharing video content’ from the one or more participant user devices. The server receives, via the graphical user interface, a selection of a subset of the content to include in the recording of the conference. The server generates the recording of the conference according to the selection of the subset of the content) [Abstract; Figs. 4-5 & 8-9] (e.g., During the conference, each participant in the conference may specify which ‘content’ (e.g., audio, video, and/or screensharing) or ‘parts of content’ (e.g., a ‘screenshared presentation file’, but not a ‘screenshared word processing document’) that are obtained from a computing device of the participant are to be stored in the recording) [col 2, L15-26] [col 12, L8-30; Figs. 4-5] (e.g., At 504, the requesting user device 404, based on a user input obtained via the GUI, specifies the configuration for the conference recording. The configuration is transmitted to the conference server 402. The ‘configuration’ specifies content of the conference to be included in the conference recording {i.e., which devices' audio content, camera-generated video content, and/or ‘screensharing video’ content to include}. The content may include identifiers of one or a ‘combination’ of: devices (or user accounts) participating in the conference, programs (e.g., ‘word processor program’, ‘spreadsheet program’, ‘slideshow presentation program’, and the like) in the ‘screensharing video’, objects (e.g., cars, people, animals, or plants) shown in the camera-generated video, or ‘keywords’ (e.g., “contract,” “tort,” or “lawsuit”) mentioned in the audio plus a threshold time period (e.g., 30 seconds) before or after the mention of the keyword. The programs, objects, and/or keywords may be identified using artificial intelligence techniques, for example, at least one of: speech recognition, speech-to-text processing, computer vision, or object recognition) [col 12, L63 – col 13, L19; Fig. 5] [col 11, L3-31; Figs. 4 & 8-9].
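The recording configuration Springer describes (per-conference selection of audio, camera video, screenshare content, programs, and keyword windows [col 12, L63 – col 13, L19]) can be summarized as a structure of the following kind (an illustrative sketch; all field names are hypothetical):
```python
# Hypothetical sketch of a per-conference recording configuration of the
# kind Springer describes: the requesting device selects which streams,
# programs, and keyword windows the recording keeps.
from dataclasses import dataclass, field

@dataclass
class RecordingConfig:
    include_audio: bool = True
    include_camera_video: bool = False
    include_screenshare: bool = True
    # keep screenshared content only from these program types
    screenshare_programs: list = field(
        default_factory=lambda: ["slideshow presentation program"])
    # keep audio within keyword_window_sec of any of these spoken keywords
    keywords: list = field(default_factory=lambda: ["contract"])
    keyword_window_sec: int = 30
```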
It would thus have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination with the above additional feature(s), as expressly disclosed by Springer, for the motivation of providing a method and system for the generation, storing, and/or searching of online conferences / meetings over a network, which permits a user to ‘selectively’ generate and store a recording of a conference based on some, but not all, of the audio, camera-generated video, and/or ‘screensharing video’ presented during the conference [Springer: Abstract, col 1, L6-9 & col 1, L46 – col 2, L14, Figs. 4-5 & 8-9].
Claim 9 recites substantially the same limitations as claim 2, is distinguishable only by its statutory category (method), and is accordingly rejected on the same basis.
Claim 16 recites substantially the same limitations as claim 2 and is thus rejected on the same basis.
As per claims 8, 15, Shepherd discloses the system wherein integrating a word group from the one or more word groups with the text-based transcript for the online meeting comprises: for each word group in the one or more word groups extracted from a collection of text, inserting the word group into a text-based transcript for the online meeting in a position relative to other text, based on the time, during the online meeting, at which the collection of text from which the word group was extracted was shared by the first meeting participant (Shepherd, e.g., virtual meetings (sometimes referred to as ‘online meetings’ or online sessions) provide an opportunity to garner information from these meetings. For example, there can be ‘voice to text transcription’ for the virtual meeting…) [0019] (e.g., Screen share 408 can comprise an image of one participant's computer screen that is shared with other participants in the online session. In some examples, Screen share 408 comprises ‘text’, which can be analyzed, and determined that it does not identify a participant's ‘name’…) [0072; Fig. 4]; and annotating the word group to include i) the timestamp for the word group, and ii) information identifying the first meeting participant (Shepherd, e.g., Output 216 can comprise, for each time period, a gallery recording {metadata} & {‘time stamp’, ‘person name’, ‘region of interest coordinates’, and ‘mask coordinates’}) [0059].
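The timestamp-based insertion recited in claims 8 and 15 is of the following general form (an illustrative sketch; identifiers are hypothetical; Python 3.10+ for the key argument to bisect.insort):
```python
# Hypothetical sketch of inserting a word group into a transcript at a
# position based on when its source text was shared, annotated with its
# timestamp and the sharing participant (claims 8, 15).
import bisect

def insert_word_group(transcript, word_group, ts, participant):
    """transcript: list of (timestamp_sec, participant, text), time-sorted."""
    entry = (ts, participant, word_group)  # annotation: timestamp + sharer
    bisect.insort(transcript, entry, key=lambda e: e[0])
    return transcript
```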
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP 706.06(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GLENFORD J MADAMBA whose telephone number is (571)272-7989. The examiner can normally be reached on Mondays-Fridays, 9am-5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Christopher Parry can be reached on 571-272-8328. The fax phone number for the organization where this application or proceeding is assigned is 703-872-9306.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/GLENFORD J MADAMBA/Primary Examiner, Art Unit 2451