Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Objections
Claims 19-20 are objected to because of the following informalities: claims 19 and 20 should depend from their independent claim 15. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Radkowski et al (US 2025/0117997 A1).
Consider claims 1, 8 and 15, Radkowski et al teach an information handling system, method and videoconferencing web camera executing computer readable code instructions of a dynamic presentation area captured image normalizing system comprising: a network interface device to transceive video and audio data pursuant to a hardware processor executing computer readable code instructions of a videoconferencing software application during a videoconferencing session (par. 0035; “For example, during a video conference meeting, the communication device 110 may be used to communicate a video feed that includes one or more virtual whiteboard images through a network to remote computer devices controlled by other participants of the video conference meeting. The communication device 110 may include hardware circuitry for establishing a network connection to remote computer devices. The hardware circuitry may include a router, a modem, a transceiver, and/or the like. Optionally, the communication device 110 may include transceiving circuitry, one or more antennas, and/or the like for wireless communication”; par. 0038; “The user computer device 202 may be communicatively connected to the remote computer devices 204 and the servers 206 via a network 208. The network 208 may be the Internet, a local area network (LAN), a cellular network, or the like”); a camera operatively coupled to the hardware processor to capture from a video feed a plurality of raw images of a viewing area including an identified presentation area at a planar presentation surface in the viewing area where user-supplied presentation material is added by a user during the videoconferencing session (par. 0031; “The virtual whiteboard system 100 also includes one or more tangible and non-transitory computer-readable storage media (e.g., data storage devices), referred to herein as memory 104. 
The memory 104 may store programmed instructions (e.g., software) that are executed by the one or more processors 102 to perform the operations described herein. For example, the program instructions stored in the memory 104 may be executable by the one or more processors 102 to analyze image data generated by a camera and detect a drawing board in the image data”; par. 0034; “The camera 106 is an optical sensor that generates (e.g., captures) image data representative of subject matter within a field of view of the camera 106 at the time that the image data is generated. The image data may be a series of image frames generated over time. The series of image frames may be a video. In an embodiment, the camera 106 may be generally fixed in place during use, such as mounted to a user computer device which is stationary on a desk while the user computer device is operated. For example, the camera 106 may be a webcam that is oriented to face toward a drawing board in a room. The field of view may encompass the drawing board and also a user that is present in front of or next to the drawing board”); a hardware processor to execute computer readable code instructions of the dynamic presentation area captured image normalizing system to identify the identified presentation area in a first raw image of the viewing area (par. 0044-0045; “In an embodiment, the board detection feature may be active during a video conference meeting. 
For example, a setting may dictate that the processor 102 actively analyzes the image data generated by the camera 106 over time during a video conference meeting to detect the presence of the drawing board 304 in one or more image frames”; “the board detection algorithm 112 may be an object detector that is trained or designed to detect a drawing board in an image as a region of interest”); the hardware processor to execute computer readable code instructions of the dynamic presentation area captured image normalizing system to identify applied user-supplied presentation material within the identified presentation area of a second raw image of the viewing area (par. 0045; “The characteristics of the frame 410 may also be used by the processor 102 to delineate a search area in which to search for user-based content displayed on the surface 306 of the drawing board 304. For example, any text or other graphical symbols drawn by the user onto the surface 306 of the drawing board 304, as well as tokens applied on the surface 306, would be depicted in the portion of the image data that is within the surface area bounded by the frame 410. The board detection algorithm 112 may use an expected color of the drawing board 304 to assist with identifying the drawing board 304 in the image data and differentiating from other objects that may have similar shapes (e.g., other rectangular objects). For example, the board detection algorithm 112 may use a whiteboard color distribution, shape information of the whiteboard, and a size of the whiteboard to identify the area as a whiteboard (labeling the area as a whiteboard) to be able to distinguish it from other rectangular areas in the image (e.g., a painting). 
The algorithm 112 may be probabilistic, meaning that it may also work if the whiteboard is partially covered/obstructed and/or discolored (e.g., not clean)”); the hardware processor to execute code instructions to determine boundaries of a skewed polygon enclosing the applied user-supplied presentation material (par. 0045; “The processor 102 may analyze the region of interest for edges 408 of the drawing board 304 based on pixel contours. The processor 102 may assemble the edges 408 into a frame 410 that represents an outer boundary of the drawing board 304. The frame 410 is rectangular in FIG. 4, and is skewed at an angle in the image. The frame 410 may have a different shape in other embodiments. The drawing board 304 may be described or identified by coordinate points at the four corners of the frame or outer boundary 410. The processor 102 may use the characteristics (e.g., corner coordinates, angles between edges 408, etc.) of the frame 410 for rectifying the image data that is received to form a virtual whiteboard image”); the hardware processor to execute code instructions to translate the skewed polygon and the applied user-supplied presentation material to orthogonally-adjusted user-supplied presentation material within a rectangular polygon by proportionally reorienting the applied user-supplied presentation material pixels from the second raw image across the rectangular polygon to appear as orthogonally oriented (Fig. 3; fig. 8; par. 0029; “The image data may be modified by a computer by rectifying the image data (e.g., to modify an orientation of the drawing board and/or the content thereon), augmenting and/or enhancing the depicted content on the drawing board, rendering the image data for display on a display device, and/or the like”; par. 0068; “The virtual whiteboard image 316 may be a modified version of the input image 302. 
For example, the virtual whiteboard image 316 may have edits and/or augmentations based on user commands provided by the user selectively displaying graphical symbols on the surface 306 of the drawing board 304. In another example, handwritten text shown in the virtual whiteboard image 316 may be enhanced via the sharpening function 312 to be more legible to remote observers that receive the virtual whiteboard image 316. The content on the virtual whiteboard image 316 may have improved clarity and resolution due to the motion tracking and stabilizing function 314”); and the network interface device to transmit an orthogonal view image of the orthogonally-adjusted user-supplied presentation material to a remote participant in the videoconferencing session (par. 0068; “The virtual whiteboard image 316 may be communicated to one or more remote computer devices 204 to allow one or more remote participants of a video conference meeting to view the virtual whiteboard image”).
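For context, the rectification mapped to the claim language above (translating the skewed polygon and its content to an orthogonal rectangle by proportionally reorienting pixels) corresponds to a standard planar homography. The sketch below is illustrative only; the corner coordinates and rectangle size are hypothetical and are not taken from Radkowski et al:

```python
import numpy as np

def homography_from_corners(src, dst):
    # Direct linear solve for the 3x3 homography H (with H[2,2] = 1)
    # that maps each src corner to the matching dst corner.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    # Apply H in homogeneous coordinates, then de-homogenize.
    x, y, w = H @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)

# Hypothetical corners of the detected skewed board frame in the raw
# image (clockwise from top-left) and the target orthogonal rectangle.
skewed = [(40, 30), (580, 70), (560, 420), (60, 400)]
rect = [(0, 0), (640, 0), (640, 480), (0, 480)]

H = homography_from_corners(skewed, rect)
# Each corner of the skewed polygon now lands on the corresponding
# rectangle corner; every interior pixel of the presentation material
# is reoriented proportionally by the same transform H.
```

Applying `warp_point` (or an equivalent inverse-mapped image warp) across the polygon yields the orthogonally oriented view described in paragraph 0029 of the reference.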
Consider claims 2 and 16, Radkowski et al teach wherein a third raw image of the viewing area including updated applied user-supplied presentation material changed by a presenter is captured and the hardware processor executes code instructions to translate the skewed polygon and the updated applied user-supplied presentation material from the third raw image of the viewing area to generate updated orthogonally-adjusted user-supplied presentation material for presentation to the remote participant (par. 0063; “For example, once the filter parameters are selected for the first character, the processor 102 may automatically use those filter parameters when detecting additional instances of the first character in the handwritten text 802. Optionally, a sequence alignment algorithm (e.g., Neddleman-Wunsch Score) may be applied on the OCR-recognized text 804 by the processor 102 to identify text similarities. The sequence alignment algorithm may determine (i) if text is new, (ii) has been updated, or (iii) has already been processed”).
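The quoted passage describes a sequence-alignment step that classifies recognized text as new, updated, or already processed. As an illustration only, the same classification can be sketched with Python's standard difflib (a matching-block approach, not the Needleman-Wunsch scoring named in the reference); the OCR strings below are hypothetical:

```python
import difflib

# Hypothetical OCR output from two successive frames of the board.
previous = ["Agenda", "1. Budget review"]
current = ["Agenda", "1. Budget review (Q3)", "2. Hiring plan"]

# Align the two text sequences; "equal" spans were already processed,
# anything else is new or updated content to push to remote viewers.
matcher = difflib.SequenceMatcher(a=previous, b=current)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag == "equal":
        print("already processed:", current[j1:j2])
    else:
        print("new or updated:", current[j1:j2])
```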
Consider claims 3 and 17, Radkowski et al teach wherein the applied user-supplied presentation material includes hand writing applied to the planar presentation surface in the viewing area which is identified by execution of an image recognition algorithm of the dynamic presentation area captured image normalizing system (par. 0059; “once the processor 102 detects the drawing board 304 in the image data, the processor 102 in an embodiment performs a sharpening function 312. The sharpening function 312 may be performed after the symbol detection function 310, before the symbol detection function 310, or concurrently with the symbol detection function 310. The sharpening function 312 is designed to clarify and enhance handwritten content using OCR-based text recognition and inpainting techniques. For example, the inpainting techniques may be rendering techniques that augment the original character content handwritten by the user to enhance the characters and make the characters more legible, standardized in shape, and/or have a greater contrast with the surrounding area of the board 304”).
Consider claims 4, 12 and 18, Radkowski et al teach further comprising: the hardware processor to execute code instructions of an image-differencing algorithm in the dynamic presentation area captured image normalizing system to subtract the second raw image from the first raw image to remove the visual area of the raw images leaving the applied user-supplied presentation material in a composite image (par. 0048; “In an embodiment, the virtual whiteboard system 100 defines associated relationships between some specific graphical symbols and commands. The relationships may be pre-defined as default settings. Optionally, the virtual whiteboard system 100 may enable a user to modify relationships and/or add new relationships for customization. The commands represent user-instructed operations to be performed by the processor 102. The operations may affect how content is displayed on a virtual whiteboard display interface to remote users that are not present in the room and that view the drawing board 304 via their remote computer device 204. For example, some commands may represent editing tasks to be performed on the content that is displayed on the virtual whiteboard display interface. By performing an editing task, the content that is shown on a virtual whiteboard image to a remote user may differ from the actual content that is displayed on the surface 306 of the drawing board 304”).
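The image-differencing operation recited in these claims (subtracting one raw frame from another so that only the applied presentation material remains) can be sketched as a simple per-pixel absolute difference. The frames and threshold below are hypothetical and are not drawn from the reference:

```python
import numpy as np

# Hypothetical frames: a first raw image of the empty board and a
# second raw image captured after the user has written on it.
first = np.full((4, 6), 200, dtype=np.uint8)   # blank board, uniform gray
second = first.copy()
second[1, 2] = 30                              # dark ink stroke pixels
second[2, 3] = 40

# Absolute difference cancels everything common to both frames,
# leaving a composite containing only the newly applied material.
diff = np.abs(second.astype(np.int16) - first.astype(np.int16)).astype(np.uint8)
mask = diff > 25                               # threshold out sensor noise
ink_coords = np.argwhere(mask)                 # coordinates of the new ink
```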
Consider claims 5 and 13, Radkowski et al teach further comprising: the hardware processor to execute code instructions of an editing algorithm in the dynamic presentation area captured image normalizing system to remove an identified object other than the user-supplied presentation material (par. 0048; “In an embodiment, the virtual whiteboard system 100 defines associated relationships between some specific graphical symbols and commands. The relationships may be pre-defined as default settings. Optionally, the virtual whiteboard system 100 may enable a user to modify relationships and/or add new relationships for customization. The commands represent user-instructed operations to be performed by the processor 102. The operations may affect how content is displayed on a virtual whiteboard display interface to remote users that are not present in the room and that view the drawing board 304 via their remote computer device 204. For example, some commands may represent editing tasks to be performed on the content that is displayed on the virtual whiteboard display interface. By performing an editing task, the content that is shown on a virtual whiteboard image to a remote user may differ from the actual content that is displayed on the surface 306 of the drawing board 304”).
Consider claims 6, 14 and 19, Radkowski et al teach further comprising: the network interface device to transmit the orthogonal view image of the orthogonally-adjusted user-supplied presentation material to the remote participant with the video feed of the videoconferencing session (par. 0035; “For example, during a video conference meeting, the communication device 110 may be used to communicate a video feed that includes one or more virtual whiteboard images through a network to remote computer devices controlled by other participants of the video conference meeting. The communication device 110 may include hardware circuitry for establishing a network connection to remote computer devices. The hardware circuitry may include a router, a modem, a transceiver, and/or the like. Optionally, the communication device 110 may include transceiving circuitry, one or more antennas, and/or the like for wireless communication”).
Consider claims 7 and 20, Radkowski et al teach further comprising: the network interface device to transmit the orthogonal view image of the orthogonally-adjusted user-supplied presentation material to the remote participant after the videoconferencing session ends (par. 0006; 0027; “In an example use application, the system and method may be used during video conference meetings (e.g., calls), although the virtual whiteboard system and method disclosed herein are not limited to video conferencing applications. For example, the virtual whiteboard image generated by implementing a command based on user content applied to the drawing board can be recorded in a memory device, communicated to a remote computer device outside of a video conference meeting (such as in an email), and/or the like”).
Consider claim 9, Radkowski et al teach further comprising: generating, via the hardware processor, orthogonally-adjusted first user-supplied presentation material by translating the first user-supplied presentation material within the first polygon by proportionally reorienting first user-supplied presentation material pixels of the first user-supplied presentation material across the rectangular polygon to appear as orthogonally oriented to yield the orthogonally-adjusted first user-supplied presentation material (par. 0029; “The virtual whiteboard may be generated by modifying image data received that depicts a physical drawing board in a room. The virtual whiteboard may present an altered version of the content that is actually displayed on the drawing board, as captured in image data by a camera. The image data may be modified by a computer by rectifying the image data (e.g., to modify an orientation of the drawing board and/or the content thereon), augmenting and/or enhancing the depicted content on the drawing board, rendering the image data for display on a display device, and/or the like”) prior to capturing the second raw image (par. 0045; “The characteristics of the frame 410 may also be used by the processor 102 to delineate a search area in which to search for user-based content displayed on the surface 306 of the drawing board 304. For example, any text or other graphical symbols drawn by the user onto the surface 306 of the drawing board 304, as well as tokens applied on the surface 306, would be depicted in the portion of the image data that is within the surface area bounded by the frame 410. The board detection algorithm 112 may use an expected color of the drawing board 304 to assist with identifying the drawing board 304 in the image data and differentiating from other objects that may have similar shapes (e.g., other rectangular objects). 
For example, the board detection algorithm 112 may use a whiteboard color distribution, shape information of the whiteboard, and a size of the whiteboard to identify the area as a whiteboard (labeling the area as a whiteboard) to be able to distinguish it from other rectangular areas in the image (e.g., a painting). The algorithm 112 may be probabilistic, meaning that it may also work if the whiteboard is partially covered/obstructed and/or discolored (e.g., not clean)”).
Consider claim 10, Radkowski et al teach wherein the second user-supplied presentation material is an expansion of and includes the first user-supplied presentation material (par. 0045; “The characteristics of the frame 410 may also be used by the processor 102 to delineate a search area in which to search for user-based content displayed on the surface 306 of the drawing board 304. For example, any text or other graphical symbols drawn by the user onto the surface 306 of the drawing board 304, as well as tokens applied on the surface 306, would be depicted in the portion of the image data that is within the surface area bounded by the frame 410. The board detection algorithm 112 may use an expected color of the drawing board 304 to assist with identifying the drawing board 304 in the image data and differentiating from other objects that may have similar shapes (e.g., other rectangular objects). For example, the board detection algorithm 112 may use a whiteboard color distribution, shape information of the whiteboard, and a size of the whiteboard to identify the area as a whiteboard (labeling the area as a whiteboard) to be able to distinguish it from other rectangular areas in the image (e.g., a painting). The algorithm 112 may be probabilistic, meaning that it may also work if the whiteboard is partially covered/obstructed and/or discolored (e.g., not clean)”).
Consider claim 11, Radkowski et al teach further comprising: transmitting, via a network interface device, the orthogonally-adjusted second user-supplied presentation material to a remote participant in the videoconferencing session (par. 0068; “The virtual whiteboard image 316 may be communicated to one or more remote computer devices 204 to allow one or more remote participants of a video conference meeting to view the virtual whiteboard image”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any response to this action should be mailed to:
Mail Stop ____ (explanation, e.g., Amendment or After-final, etc.)
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450
Facsimile responses should be faxed to:
(571) 273-8300
Hand-delivered responses should be brought to:
Customer Service Window
Randolph Building
401 Dulany Street
Alexandria, VA 22314
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOC DUC TRAN whose telephone number is (571) 272-7511. The examiner can normally be reached Monday through Friday, 8:30 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Duc Nguyen, can be reached at (571) 272-7503. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Quoc D Tran/
Primary Examiner, Art Unit 2691
January 21, 2026