R has a somewhat special object-oriented ecosystem in which methods belong not to classes but to generic functions. This can be seen as a more functional flavour of object orientation that plays really well with the language. The logic of both main class frameworks, S3 and S4 (and, I think, the forthcoming S7 as well), makes perfect sense for functions one could expect to work on a variety of objects, if not on every type, such as plot, print or summary. However, when a function is heavily tailored to a specific kind of object, one can easily end up writing a generic and a default method only to get to the actual class method of interest. Something along these lines:
foo <- function (...) {
  structure(
    list(...),
    class = 'foo'
  )
}

bar <- function (x, ...) {
  UseMethod('bar')
}

bar.default <- function (x, ...) {
  cat('nothing to do here, just a formality...\n')
  invisible()
}

bar.foo <- function (x, ...) {
  # really interesting stuff done with 'foo'
}
There are lots of cases between these two extremes, where the method one wants to define could plausibly make sense for other classes or for base types. But it is not clear to me how to pin down the point where this stops making sense and it becomes better to just write a plain function that expects a specific kind of object.
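For contrast, the plain-function alternative I have in mind would look something like the sketch below. The name `bar2` and the placeholder body are purely illustrative; the point is replacing UseMethod() dispatch with an explicit class check via inherits():

```r
# Constructor repeated so this sketch is self-contained
foo <- function (...) structure(list(...), class = 'foo')

# Plain (non-generic) alternative: validate the input up front
# instead of providing a generic plus a do-nothing default method
bar2 <- function (x) {
  if (!inherits(x, 'foo')) {
    stop("'x' must be a 'foo' object", call. = FALSE)
  }
  # really interesting stuff done with 'foo';
  # here just a placeholder returning the number of fields
  length(unclass(x))
}

bar2(foo(a = 1, b = 2))  # works on 'foo' objects
```

This version refuses anything that is not a 'foo', whereas the generic version silently accepts everything and does nothing for other classes, which is part of the trade-off I am asking about.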
I've been writing a package for a really specific domain (geochronology) that, as such, is well suited to class definitions and methods. I currently have a mixture of properly defined generics with methods, and non-generic functions that only accept an object of a certain class as input. I'm starting to fear this will come back to bite me sometime in the future, and I would love to hear some thoughts on the topic from more experienced R programmers in this collective.